Things Have Changed

How Brevdev Is Making AI Development Accessible and Simple for Everyone with Nader Khalil

April 21, 2024 Jed Philippe Tabernero

When ChatGPT was launched in November 2022, few could have predicted the rapid transformation the world would undergo in the following months. It emerged as the fastest-growing consumer technology in history, catching many by surprise.

In the year since its launch, ChatGPT has revolutionized nearly every aspect of the tech industry. It enables computers to create articles, finish homework assignments, and generate art, fundamentally altering our understanding of work, creativity, and the very notion of 'search'.

As companies explore advanced AI, they often struggle with managing the necessary infrastructure, including the specialized chips that power these technologies, which can slow their progress. Brev, led by Nader Khalil, offers a strategic solution that simplifies the complex landscape for companies striving to leverage AI.

Brev’s platform is designed to allow companies to focus on leveraging AI for innovation without the overhead of managing hardware, ensuring they can harness these powerful technologies efficiently.

The ongoing GPU capacity shortage has highlighted the need for more efficient resource management. The scarcity of GPUs is affecting not only the largest customers in the world, but also the startups and midsize companies that really need access to them.

Today, on the Things Have Changed Podcast, we're diving deep with Nader Khalil, Co-Founder and CEO of Brev, into how they support the generative AI boom, ensuring that businesses can innovate freely with AI and reducing the trouble of managing high-demand GPUs like A100s, H100s, and others.



Jed Tabernero:

In February 2024 alone, ChatGPT had 1.6 billion visits to its website, and people have not stopped buzzing about generative AI. Basically, computers writing articles, doing homework, or creating art. It's kind of reshaping how we think about work, creativity, and even search. As companies explore advanced AI, they often struggle with managing the necessary infrastructure, including the specialized chips that power these technologies, which can slow their progress.

Brev, led by Nader Khalil, offers a strategic solution that simplifies the complex landscape for companies striving to leverage AI.

Nader Khalil:

Brev is a dev tool that makes it really easy to use GPUs. We take the hardware requirements that are needed, we take the software that needs to get installed on top of that as well, and we can create essentially a one-click deploy button for any of these.

Today on the Things Have Changed Podcast, we're going to talk to Nader, CEO and co-founder of Brev, about how he's planning on making GPUs easy to use.

Nader Khalil:

I think that there's a lot of people doing some really great work, and my perspective is: okay, how can we work with them and really just pave an epic road for some users?

Shikher Bhandary:

With this whole wave of AI, this wave of LLMs, and using a lot more specialized compute, you really need a good foundation to even begin. Companies and startups are so desperate for this hardware. The lack of GPU availability out there in the world is not only affecting the largest customers in the world, but also startups and midsize companies that really need access to it. So today we are super excited to have Nader. He is one of the first five founders on the Things Have Changed podcast, and we've come a long way since then. Welcome back to Things Have Changed.

Nader Khalil:

Yeah, thanks for having me. Compute is definitely a bottleneck now. It used to be something where, when you finished your development process, you could then burst to the cloud to actually deploy your thing and serve it to users. But now you actually need to use the cloud at the earliest part of the development lifecycle. And not just that, but everyone's trying to figure out how to roll AI into their stack. There's never been more of a bottleneck.

Shikher Bhandary:

Yeah, so this could be a very brief overview for our audience who might have heard about this whole AI boom. They've probably seen Nvidia; CNBC is covering Nvidia 24 hours a day, right? So they probably know the stock is up 500,000%. But outside of all that, what have you seen over the past year, and how has that fed into what you are building right now?

Nader Khalil:

Yeah, I think generative AI was a catalyst for a lot of folks to try to figure out how to take advantage of the next wave of technology. Typically, with every new wave of technology, you see a new cohort of startups enter the space. What's been very unique about now is there's pressure from the largest companies, and the boards of the largest companies, to figure it out; everyone's wondering, what's their AI strategy? This wave feels very unique in that everyone, startups, big companies, governments, are all trying to figure out what's going on, how do you leverage it, how do you take advantage of it. And yeah, it's been super exciting.

Jed Tabernero:

We're talking about apps like ChatGPT and Gemini. Gosh, my mother has even tried ChatGPT, dude. So that's how universal generative AI has become. There's more than a hundred million people who use ChatGPT on a weekly basis. So when you talk about this wave and the changing priorities of companies, even at these massive tech startup levels, what got you thinking about this solution, BrevDev? How did that bridge the gap between your understanding of what was going on in the current space and how you could provide value to these folks, the developers and engineers in your space specifically?

Nader Khalil:

I think coming up with ideas and trying to solve them typically doesn't work. Ideas are probably shit until the market gets to help shape them. What we had actually done initially with Brev, which happened around November 2020: when we were scaling our previous startup, we were dealing entirely with infrastructure issues and weren't able to talk to our stakeholders and build things that they wanted. So the initial goal, the kind of naive goal for Brev, was, hey, let's build an infrastructure tool that essentially takes that burden away, because it didn't have anything to do with our stakeholders. We initially leaned into serverless. We said, hey, serverless has a lot of promise; what if we could build a serverless platform where you're not having to worry about this at all? What we learned in doing that is that you go to serverless because you don't want to manage servers, but you quickly find yourself managing serverless and all the different constraints that come with it: the different runtimes, the timeouts, all these things that you're patching. So we pivoted from there and said, hey, what if we built a platform that made it really easy to use servers? It's not serverless; it's just the easiest way to use servers. And that found us accidentally building cloud dev environments. The idea is, hey, you can just code in the cloud, because it's now very simple to do. But that really lacked an inflection point, and we were still struggling with go-to-market. We had some users using us; we didn't know why they were using us. And then one of our users was the CTO of an AI company, and he was just like, hey, I'm using you for all of my CPU development, could I use you for my GPU development as well? This happened around June or July 2022. And we looked into the problem. We were YC in our previous company,
so I emailed every YC company that said they were AI. This was before ChatGPT, so that wasn't as large a share of every cohort; if a company had labeled themselves as AI, they were probably training their own models. That validated the problem space. And then we just leaned in, and we realized that making a dev tool that makes it really easy to use GPUs actually makes more sense. The pain of utilizing one is much greater, the complexity is much higher, and especially now, with ChatGPT being a big catalyst and everyone trying to figure out their AI strategy, there are more people who don't have experience provisioning cloud infrastructure, application developers who are trying to figure out how to fine-tune and train models and use these cloud resources.

Jed Tabernero:

How has that been for startups? Because what I'm noticing these days is that obviously we have these massive applications like ChatGPT, which have probably reserved capacity years in advance, with special relationships that give them a prioritization no other startup has. I feel like startups developing things in the AI and ML space these days are having a tough time with it. Are you seeing a lot of those people come to you for help?

Nader Khalil:

Yeah, absolutely. And there's research institutions, universities; they'll come to us for their computational infrastructure. The way I look at it is, if a cloud has the capacity or the capability to build a large cluster, then, for example, if Azure gets new GPUs, they will give them to OpenAI, which needs them. And so the way that startups are typically getting GPUs is by working with smaller data centers and cloud providers that have healthy access to GPUs. It's not a cluster that OpenAI is going to get access to, but it's sufficient supply. There are a couple of companies doing great work in the space, and they're starting to service other larger startups as well. There's Lambda GPU Cloud; Crusoe Cloud, which we work with; Dean Nelson from Cato Digital, they take GPUs and find excess capacity in data centers, and they'll surface those as well. There are decentralized GPU marketplaces like Akash, which we're working on an integration for. So there are ways for startups and teams to still get access to GPUs. It's just a little trickier.

Shikher Bhandary:

Got it. So that's where your platform comes in: the relationships that you're building with the different smaller data centers, as well as having the optionality for a customer to tap into whatever AWS or GCP can potentially provide. But you also have a deeper pool by accessing all these different entities, not just the hyperscalers.

Nader Khalil:

Yeah, absolutely. And from a high level, Brev is a dev tool that makes it really easy to use GPUs. It's not just getting a GPU; once you get one, it's also making sure that you're using it properly, getting everything set up, and using it efficiently. Like, you don't need an A100 or an H100; there are other GPUs that have a lot of GPU memory. There are things like the L40S, which we're working with Crusoe Cloud to surface. A lot of that know-how is just built into our product. So you can start from one of our guides, like fine-tuning or training Mistral. What Brev does is we take the hardware requirements that are needed, we take the software that needs to get installed on top of that as well, and we can create essentially a one-click deploy button for any of these. That will go and provision the GPU from one of the data centers or clouds that we work with, then set up all the software and just put you in it. So the idea is not just that you're not worrying about the shortage, but also about getting the GPU itself set up, the CUDA drivers, or even packaging and bundling it once you're ready for inference.
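To make the idea concrete, here's a minimal sketch of what such a deploy template might capture: a hardware requirement plus the software to install on top of it. All field names, the template, and the GPU catalog here are illustrative assumptions, not Brev's actual API.

```python
# Hypothetical sketch of a one-click deploy template: the hardware
# requirement plus the software that gets installed on top of it.
# Field names and the GPU catalog are illustrative, not Brev's API.

TEMPLATE = {
    "name": "fine-tune-mistral",
    "hardware": {"min_gpu_memory_gib": 40, "gpu_count": 1},
    "software": ["cuda-drivers", "pytorch", "transformers", "peft"],
}

# Illustrative GPU types and their memory in GiB.
GPU_CATALOG = {"T4": 16, "L40S": 48, "A100": 80, "H100": 80}

def pick_gpu(template, catalog):
    """Pick the smallest GPU whose memory meets the template's
    requirement, on the idea that you don't always need an A100."""
    need = template["hardware"]["min_gpu_memory_gib"]
    fits = [(mem, name) for name, mem in catalog.items() if mem >= need]
    if not fits:
        raise ValueError(f"no GPU with >= {need} GiB available")
    return min(fits)[1]

print(pick_gpu(TEMPLATE, GPU_CATALOG))  # picks the L40S, not an A100
```

In a real orchestrator, the step after matching hardware would be provisioning that instance from whichever data center has it available, then installing the template's software list.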

Jed Tabernero:

How much time do people really spend setting up GPUs? How much time do they spend on infrastructure stuff? It might be interesting for those of us outside the industry to see, okay, that's actually a huge value prop, right? Because all of a sudden, you don't have to spend time thinking about how to set up the GPU. You can spend more time thinking about how to make your model better; you can spend time doing the stuff that you really want. So if you could help us, the folks who aren't super familiar with this space, understand it a little better, that'd be great, man.

Nader Khalil:

Your CPU is essentially what's doing the computations on your computer; it's like the brain of your computer. GPUs are really unique in that they're very good at doing matrix math. This was specifically designed for graphics, for video game graphics: when you're moving pixels around on a screen, matrix math helps do that efficiently. And what was really great is that's also very useful for AI. A few years ago, when AI wasn't the hottest thing, NVIDIA chips were already being used there. That's why NVIDIA has such a lead on other companies: they had already been focusing on graphics, that was really useful for AI, and no one was really looking at the space yet. As far as getting the GPU set up, a lot of application developers typically write at a much higher level in the code base. You're not dealing with the actual hardware; you're compiling things down to CUDA kernels, you're putting things onto the hardware, and the complexity is just very different. So as more people get into AI development, essentially the pie of who's trying to use GPUs is growing, and those people don't typically have experience going down to the lower-level hardware. That's where a tool can work really well. There's also more to it than just setting up the GPU; it's also understanding which GPU you need. GPUs have GPU memory, and that's typically the bottleneck. The GPU memory is essentially the amount of a model, or the amount of your data, that it can go through at any given time. So if you have more GPU memory, you can do things faster. There are also different things like FP8, which certain GPUs have and some do not, and it's harder to really leverage these things. So there are different capabilities in different hardware.
And that's where a tool like Brev becomes really handy, because we can essentially put together templates that leverage the capabilities of different GPUs without making a user actually have to think about what hardware they're using, unless they want to, in which case you can just spin one up.
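As a back-of-envelope illustration of why GPU memory is the bottleneck: a model's weights alone take roughly parameters times bytes per parameter, which is why lower-precision formats like FP8 let the same model fit on a smaller GPU. This is a rough rule of thumb, not an exact sizing method; real workloads also need headroom for activations, KV cache, and (when training) optimizer state.

```python
def estimate_gpu_memory_gib(n_params_billion, bytes_per_param):
    """Back-of-envelope GiB needed just to hold a model's weights:
    parameters x bytes per parameter. Real runs need extra headroom
    for activations, KV cache, and (when training) optimizer state."""
    return n_params_billion * 1e9 * bytes_per_param / 2**30

# A 7B-parameter model, weights only:
print(round(estimate_gpu_memory_gib(7, 2), 1))  # 16-bit weights: ~13.0 GiB
print(round(estimate_gpu_memory_gib(7, 1), 1))  # 8-bit (e.g. FP8): ~6.5 GiB
```

By this estimate, a 7B model in 16-bit fits comfortably on a 16 GiB T4 for inference, while an 80 GiB A100 or H100 is really only mandatory for much larger models or training workloads.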

Jed Tabernero:

Makes a lot of sense. And I really liked that quote you have: it enables builders to focus on what they're building rather than what they're building it on. I really liked that quote because it helped me, at least, understand: okay, it's an optimization tool. You're optimizing for all of those things so that we don't have to think about them as builders. Right.

Nader Khalil:

Yeah. And also, I think something that's often forgotten about AI/ML workflows: if you're fine-tuning or training a model, it's possible to give it too much training data, and now it's overfitting to something that you trained it on. A lot of this stuff is like cooking: you're putting in a little bit of salt and some pepper and you're trying things out. So the goal isn't that there's this one click that's going to make it work. The goal is that you give someone a tool that makes it so they can iterate very quickly. As an organization, you're going to have data sets that you didn't think about training with, or data sets that you want to try to see if you get better results from a model. You might be able to take a much smaller model that's way cheaper, give it a really good data set, and get way better results than even GPT-4. So the goal for Brev is: how can you give someone a tool that gets them comfortable making these rapid iterations? Because that's the dev flow, right? How can you go from fine-tuning and trying something out to actually running inference on that model, testing it out, seeing how it's performing? It's not a task that, once it's checked, it's done.

Shikher Bhandary:

You mentioned iterations, Nader, and we've read the news where it's freaking expensive, because of the obvious supply-demand mismatch. It's so expensive to get access to any of this hardware, right? These really sophisticated, specialized GPUs and the compute. I know that's one of the big features within your platform, where there is a greater focus on understanding the cost of the product that you will use, and not just understanding it, but also helping you optimize it for your success, right? So can you talk a bit more about how that feature came through? Was it just using some GPUs from your different vendors and realizing, hang on, I have to pay them like 15,000 for six hours of work? How did that kind of come through to you?

Nader Khalil:

I think anyone who's spun up cloud resources has dealt with the pain of leaving them on and then seeing that bill. Usually, as a developer, you're not thinking about the work that you're doing being metered, but suddenly, when you're using cloud resources, it is: every second that you're in your code editor or in your Jupyter notebook, you're actually paying, and that doesn't feel as good. So we're working on a few things here to try to make this easier. We're going to be implementing, soon, automatically stopping instances that are not being utilized. The first step is actually showing you your utilization. Another thing is you might not be fully utilizing your compute, so even now with Brev, you can start on a cheaper GPU, or even a CPU, and just move into a GPU instance when you're ready. Giving you flexibility on the compute for what you're running is an important step, but there's definitely a lot more to come here. It's a hard problem, and definitely one of the main discomforts with starting to burst out to the cloud. Something really funny, actually: I talked to Dean Nelson, the CEO of Cato Digital, yesterday. To your point, it's expensive, and people might just want to actually explore, play around with a GPU, especially in person, actually see it instead of just interacting with it through a cloud. So in our office, we have a garage, and we're getting eight T4s that we're going to hook up in the garage. We're actually picking them up right now; we have two members of the team going down to San Jose to get the server rack. We're going to plug them into the garage and invite folks over to just hack on it. You can use it for free: if you're coming over, just plug in and let it run. So it'll be a very local compute cluster.
The most local cloud, very limited, obviously, but that might also be a nice way for you to actually take a look at the GPUs that you're running and get more physical experience with this thing. Yeah.
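Brev hasn't published how its auto-stop feature works; as a sketch of the general idea, one could sample GPU utilization via `nvidia-smi` and only stop an instance after a sustained idle window, so a brief pause between training steps doesn't kill it. The threshold and window size here are arbitrary assumptions for illustration.

```python
import subprocess

IDLE_THRESHOLD_PCT = 5   # assumed: below this, treat the GPU as idle

def gpu_utilization_pct():
    """Sample current GPU utilization with nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    # One line per GPU; report the busiest one.
    return max(int(line) for line in out.strip().splitlines())

def should_stop(samples, threshold=IDLE_THRESHOLD_PCT):
    """Stop only if every sample in the window was idle, so a short
    pause between training steps doesn't kill the instance."""
    return bool(samples) and all(s < threshold for s in samples)

print(should_stop([0, 2, 1]))   # idle for the whole window: True
print(should_stop([0, 90, 1]))  # one busy sample: False
```

A real implementation would run this on a timer, keep a rolling window of samples, and call the cloud provider's stop API when the window is idle; the point of the windowing is simply to avoid false positives during bursty workloads.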

Shikher Bhandary:

Who are the customers that you are actually speaking to? Is it more the startups and the midsize companies that are now using BrevDev to get access to resources?

Nader Khalil:

So access is one issue, but most people aren't using us for access. They're using us more for the tool: once you have access, what do you do? We definitely have access to compute, and Brev works with any compute source too. You can actually connect your AWS account; if you have quota, you can connect your Azure or Google Cloud account and still use the same tool. So it's a lot about getting someone into an instance that's properly set up to do the task at hand. We're seeing kind of everything, and as a seed-stage startup, that kind of makes it harder as we try to hone in on our ICP. We see researchers at institutions; I'm actually heading to Georgia Tech on Monday. We're going to be there all week, meeting with a bunch of labs. I don't know if you saw, but they just created a hacker space with NVIDIA. NVIDIA donated a bunch of GPUs to Atlanta.

Shikher Bhandary:

Interesting. Yeah. Yeah.

Nader Khalil:

We see researchers at institutions. We see startups across the board, employees at larger startups. We see founding teams starting to use Brev. There have been large companies in Europe where a team of data scientists has started using Brev, CTOs of public companies kicking the tires and seeing if there's a tool they can use internally. Yeah, we're seeing a lot of usage with a lot of different profiles.

Jed Tabernero:

So it's not just the startups who are looking for compute. Now you're telling me folks all across the board are interested in using the product. Question for you: how do they know about it? How do they know about all of this stuff that you're putting out, all the cool things that you guys are doing? I've seen some really great stuff on YouTube. I've tried to follow, but again, I'm not in the space, so I don't understand shit, guys, but it seems like it's really helpful stuff. We see you on X, putting stuff out there. Are these the main ways you guys get customers? Your number is out there, so I wouldn't be surprised if people are calling you every day.

Nader Khalil:

Yeah, my phone's definitely always buzzing. And yeah, I have my phone number in the docs, so you can text us and I'll reply. Pretty much what we're doing is just trying to make the best tool possible. We keep talking to users and building new things that help us hone in on PMF. As we build new things, we just make a quick video: hey, this is what we released, this is why we think it's great. We put that out on X and LinkedIn, and then we put out guides that show how to use Brev, how to fine-tune Mistral, like you said. It's funny: there are all these new models coming out, but one of our most popular guides to this day is still Llama 2. I think it goes to show that while there are a lot of these new guides and new models coming out, it gets a little noisy when everyone freaks out and tries to run the model. But actually, if you think about the use cases, it's the quality of your data that's going to dictate the quality of your results if you're fine-tuning or training. And that's pretty much it. We don't do much else for marketing; there's word of mouth.

Shikher Bhandary:

That's awesome. Nader, I wanted to ask, having been in the semiconductor industry: usually you have these massive gaps in supply, and then, ultimately, just because there's so much demand, the supply kind of meets it. So what happens, say, a few months from now, when people can get more access to H100s? You mentioned a ton of features on the product side, which are still super relevant regardless of the compute they're running on, correct?

Nader Khalil:

Yeah, absolutely. A lot of our users will also connect their own clouds and still use Brev. Brev does two things: it's a cloud orchestration tool and a consumer UI on top of it. The UI helps with actually using the compute that's available; the cloud orchestration tool helps provision the GPUs, or any compute. People use this for CPUs still as well, especially if you're doing data processing or collection; you don't need a GPU running for that. The goal is just to make a much simpler cloud experience for these AI/ML workflows.

Jed Tabernero:

One of the significant things that we saw, and what prompted us to reach out, is that acquisition you did recently with Agora Labs. Congrats on that, first of all.

Shikher Bhandary:

Hey, congrats. Hey, congrats.

Jed Tabernero:

Secondly, dude, it seems like they're in the MLOps space. We did a little bit of research on Agora Labs in general, and it seems like y'all get along. We saw the videos; the vibe seems immaculate for both teams. Talk to us a little about that, man. What was the decision-making around that, and how did it help you evolve as a company to have folks from Agora Labs within BrevDev?

Nader Khalil:

Yeah, it's funny: in that video, everyone on the Agora Labs team is above six feet. I'm six feet tall, and I look like a munchkin in that video.

Jed Tabernero:

That's the first thing I thought of. I was like, this dude's already six two. How tall are these guys?

Nader Khalil:

Yeah, the team is a very tall team. No, but yeah: so I saw the CTO of Agora Labs make a Brev account. And I remember, as a first-time founder, I used to be very afraid of competition. You see someone make an account and it's like, oh, what are they doing? Why are they doing this? But actually, I think everyone should just get closer to their competition. They're working on a similar problem set as you, and there are probably also ways for you to work together while you're competitive in other regards. So we reached out, and the first thing I noticed was that the energy was just fantastic. These guys are really smart, really high energy. It was just a clear fit; I wanted to be on a team with them. It was actually on the first call that I floated the idea by them. They had been surfacing compute from that decentralized cloud, Akash, where there's a really healthy supply of A100s and H100s available at some really great rates. What we did initially was just say, hey, why don't you guys provide us the Akash integration as an SDK? And that was a really nice way for us to get a feel for what it was like working with them. It was so funny: after the first call with them, two of our team members (we were a team of four), Carter and Tyler, both came up to me and said, hey, whatever it takes, let's get these guys on. They're brilliant. And that's definitely been the experience; it's been really fun working with everybody. Anish, who's the one on the leftmost part of the video, has been leading go-to-market with me. It's been really great being able to strategize with him, both of us hitting the ground running. Ishan is probably the smartest AI mind I've met, and it's been really great seeing him apply himself.
There were times when we were stuck putting together some NVIDIA resources; it was taking us two weeks of back and forth with their engineers. Their engineers were stuck, we were stuck. And then, on the first Saturday that Ishan came to SF, he finished it in half a day. I still don't understand how that happened, but then we were able to have the deliverable. Yeah, he's brilliant. And Tom just feels like we cloned Alec, our CTO. They have their desks right next to each other and they're just going ham. So the entire team feels like it elevated. What I've noticed with teams is that every time you shift, whether you add folks or remove folks, what you're really doing is expressing with very direct action what your team cares about. The folks that you bring on, you brought on for particular reasons; the folks that you let go, for particular reasons. Being able to be very clear, hey, these are the qualities our team views as important, it's really nice to see the entire team step up. Every time you shift your team, it's an opportunity to raise the bar, and then everyone steps up to the bar. So yeah, it's been feeling really good. We have so many things planned, launches coming and everything. It's going to be really exciting, the next couple of months.

Jed Tabernero:

That actually leads me to my next point, which is what we talked about prior to the call, dude. I love the culture. I love your phone number being on the website. I know we discussed that a little, but people don't understand how much of a big deal that is, right? Because a lot of people who lead these organizations are too busy to be dealing with certain customer problems, but you put yourself in front of that by being the available person to say, hey, listen, if you have a problem with our product, you can call me. So that's a really awesome culture to have. I want to ask a little about how you maintain that culture, how you're thinking about that, and, when you're hiring people, how you think about whether this person fits into BrevDev.

Nader Khalil:

There are a few things. One, a lot of people are afraid of bad news. It's usually not the most fun to hear, but you can really approach it from the perspective that there is no bad news: there's just the current state, the desired state, and the delta, and then you have to find the way to cross the delta. It's very easy for us to make things seem like larger tasks than they are, and dealing with bad news is one of them; just make them small tasks that we can do without emotion, without making it personal. It's very natural and human to make it a personal thing, like, I built this thing and it's not working. That's the first one. Then, as we bring on new folks: I've seen this before, where, for example, a lot of technical founders will say, oh, I need to bring in a salesperson that's going to help us get sales. It's this idea that by bringing talent in, you can start to take a backseat and go into a managerial role. But everyone needs to just do stuff: you're either building or you're selling, or probably both. And honestly, everyone on the team is technical. At the end of the day, people just mimic what they see, and if they see everyone fully applying themselves, as opposed to taking a step back and doing high-level strategy and managerial work, that encourages more people to really lean in and step up. And yeah, the office feels carbonated. You open the front door and the energy is amazing. Everyone's just really fully applying themselves.

Shikher Bhandary:

That's awesome to hear.

Jed Tabernero:

Man, you're probably thinking in terms of what's happening for the next launch, and so it becomes really nice to be able to focus on something. When you put your heads down, you've got your own space. It's awesome that you have this team. I wanted to ask: what's next, man? What are you guys working towards, the next launch, the next step, the next big milestone, now that you have a little bit of a larger team and maybe have expanded your expertise a little more? The space is looking quite promising. What's next, dude?

Nader Khalil:

Yeah. The most immediate thing we're focusing on right now is, one, connecting a few different compute sources, like Akash. That'll go live probably next week. And the main one is essentially closing the dev loop, so that we can make a really tight developer feedback loop, which allows for faster iterations. I won't share too much yet; I'm really excited for that launch, and it should happen probably in the next two to three weeks. That's something we're really excited about, because if we do get a tight dev loop, that ends up solving a really strong pain point. We've also been doing some great work with NVIDIA. Probably in the next week or two, we're going to announce our partnership. NVIDIA has a catalog of AI software, NGC: you can see containers, models, weights, a whole bunch of stuff that you can run. And, like I mentioned earlier, Brev makes these one-click deploys, where we can take the hardware that something needs to run on and properly set up the software for it. So we're working with NVIDIA to make these one-click deploy buttons. They're actually live now, but we're going to formally launch it probably in the next week or two; you can use it now. What's been great, too, is that NVIDIA really wants to reduce the friction to getting these running, so they'll even cover the GPU costs for the first hour or two. You can actually fine-tune Mistral on an A100 and get started for free. So that's super exciting. We're going to announce that very soon.

Shikher Bhandary:

Great incentive.

Nader Khalil:

Yeah. Yeah.

Jed Tabernero:

That's great, dude. That's awesome. That's a big step, number one. Number two is the development of your partnership with a huge company like this. We've already mentioned it throughout the call, right? NVIDIA is one of the most important companies in this space, if not the most important company in the space, and you're working with them.

Nader Khalil:

I think partnerships are definitely a big angle for us. If you think about Brev as being the easy button for running a lot of these services and tools, we don't want to build everything. We want to see what people want to do and just pave that path. We're working with Hugging Face, and we're trying to work with a bunch of really great companies in the space: Anyscale, Replicate. I think there's a lot of people doing some really great work, and my perspective is, okay, how can we work with them and really just pave an epic road for users?

Jed Tabernero:

No, I want to give you this opportunity, man, because we typically do this at the end of the show, and I think you actually started this off: we give you a few minutes to say what you would like to communicate to our audience. Now, our audience has evolved since you were last on the show. A lot of the folks who tune in are in the same space; they're founders as well, people who work in tech startups. So we'd love to just get your last thoughts. If you want to give a shout-out to your team, to your partners, to the people in the space, and maybe give a little plug about how people can get more involved, I think that'd be awesome.

Nader Khalil:

Yeah. Our goal is just to make the easy button, to make it really simple for folks to get started with fine-tuning, training, and deploying their AI models. So if you have any idea of how to help: we have our roadmap, we have our hypotheses, and we have partners we have access to where we're asking, hey, what can we do together? So if you have an idea of something we could do together, or something we're missing, something that would provide a lot of value to users, please reach out. Message us in our Discord, or shoot me a text; you'll see my phone number in our docs. Just get in touch in any way. We're constantly looking to see what we're overlooking, what we're missing, and what we can do for users, and we welcome any ideas. The Discord is probably the best way to talk to us and to users directly.

Jed Tabernero:

Really appreciate you coming on the show again. I've learned so much just doing research on the stuff that you guys do. Honestly, it also forced me to talk to Karam, who I haven't talked to in a while; he's our friend who's also a software developer and is deep in the space now. And yeah, I think a lot of the stuff that we discussed today is going to be relevant for the next few years. In general, we love seeing companies like yours, dude, companies in this space that aren't afraid of these giants, that are even working with them and finding ways to optimize their workflows together. That's really dope. I think it's fascinating that the customer obsession is at that level, that the CEO cares about these problems. So dude, kudos to you. Kudos to the recent acquisition. Kudos to the growth, to the hard work that you've been putting into this space. And yeah, we love to see you grow, dude. So congratulations, and hopefully we have you on the show again in the next few years and we see where Brev has come to.

Nader Khalil:

Yeah. No, I really appreciate all the kind words and thank you. Thank you guys for supporting us and for having me on again. I look forward to talking to you in three years.

Shikher Bhandary:

Great. Yeah. Thanks a ton. This was fun.

Thanks for tuning in to today's episode. We hope you found our discussion with Nader enlightening and that it sparked some ideas about the impact of streamlined technology infrastructure on your own projects. For more insights and episodes, don't forget to subscribe to the Things Have Changed Podcast on your favorite platform. Until next time, stay curious.

Jed Tabernero:

The information and opinions expressed in this episode are for informational purposes only and are not intended as financial, investment, or professional advice. Always consult with a qualified professional before making any decisions based on the content provided. Neither the podcast nor its creators are responsible for any actions taken as a result of listening to this episode.