Stas Kelvich: life after acquisition

In this episode of Dev Propulsion Labs, Stas Kelvich, co-founder of Neon and member of technical staff at Databricks, shares his physics background and what led him to become a key Postgres contributor.

He also explains how the $1B Databricks acquisition came together in just 30 days and what’s happened to Neon after that. Plus, Stas reflects on how the agent wave changed Neon’s infrastructure demands overnight and what the future of databases looks like in an AI-first world.

Watch the full video on YouTube.

Transcript:

[00:00:00] Victoria Melnikova: Hi everyone. Welcome to Dev Propulsion Labs, our podcast about the business of developer tools. My name is Victoria Melnikova. I’m the head of new business at Evil Martians, and today I’m excited to introduce my guest, Stas Kelvich, co-founder at Neon and a member of technical staff at Databricks. Hi Stas.
[00:00:25] Stas Kelvich: Nice meeting you, Victoria. Thank you for having me.
[00:00:29] Victoria Melnikova: Welcome to the podcast. I just want to start with a simple question — who is Stas? The reason I ask is because I know you have a scientific background, but you’re also an engineer, and you’re also in the startup world. So who are you today?
[00:00:49] Stas Kelvich: Yeah, most of my time I’m an engineer, and to some extent an engineering manager for our teams that work on Lakehouse and Neon. Mostly I focus on everything that surrounds the database runtime — auto scaling, connection management, and things around that. Across the years with Neon, I worked on pretty much everything and hired the initial team there.
Most of engineering reported to me up until we were around 40 or 50 people. Then we hired a VP of Engineering and I started concentrating on about half of our stack. I guess I’m mostly working on whatever is on fire today.
[00:01:42] Victoria Melnikova: So you started as a physicist and you did quantum mechanics, right? Or what was your specialty?
[00:01:48] Stas Kelvich: Yeah, so I’m trained as a physicist and I spent some time doing research. I finished university, did some startups, then did some research. I thought that software engineering is actually more fun — I liked it more, it’s more dynamic, and it’s easier to find something where you feel useful. Quantum mechanics is broad, but yeah, it falls under quantum field theory, or the more common term, high energy physics.
[00:02:26] Victoria Melnikova: Interesting. So it’s like atomic things?
[00:02:30] Stas Kelvich: Particularly what I was working on is strong laser fields. You take strong laser fields and aim them at some gas — it’s a pretty involved way to measure time with high precision, down to the attosecond. Attoseconds are more the dreamy side; femtoseconds are more realistic.
[00:03:02] Victoria Melnikova: So you were in academia doing research and teaching, right? And how did you make that bridge to going fully into software engineering and Postgres?
[00:03:14] Stas Kelvich: I was always doing both, so there was no big gap for me. I was doing web development since I was in school — I was doing both things. For me, software engineering is just more fun. It’s a bit harder in academia, particularly in theoretical physics, to find really good problems to work on. Most of the easy problems are already solved. It’s an old area where there hasn’t been a gold rush era for a long time. There might be a gold rush in biology and a bunch of other fields, but theoretical physics is harder in that sense. You’re either working on something that’s not necessarily useful, or working on something that might take more than your lifetime to solve — and neither feels that rewarding.
[00:04:23] Victoria Melnikova: Interesting. So you’re very problem-oriented, right? You like a challenge to solve.
[00:04:32] Stas Kelvich: I don’t think I’m super competitive, but I like to do stuff — to get things done, whatever it takes. Could be a software engineering problem, could be a math problem.
[00:04:45] Victoria Melnikova: So you were deeply involved in Postgres, right? In the internals — I don’t even know how to explain it because I’m not very well versed in it. Can you briefly describe what you did? How did you get involved with Postgres and what were your early tasks?
[00:05:10] Stas Kelvich: It started with a problem. I needed to build search for a prepackaged travel tours engine where you can buy tours online and you have a bunch of search criteria — dates, destinations, a lot of dimensions you want to search on. The usual indices don’t work well for that. They allow you to search on one dimension, but here you need to search across many dimensions. I tried Postgres for that and it wasn’t working well, so I looked into why and fixed it. That’s basically how it started. The Postgres community is pretty welcoming. There are some quality criteria you need to adhere to, but otherwise people are pretty open.
[00:06:07] Victoria Melnikova: I also did some research and saw that you did Ruby as well. Did you program in Ruby?
[00:06:12] Stas Kelvich: Oh yeah. My early startup experience was Ruby on Rails. I was a user from Ruby on Rails version one, so around 2007. Back then it had a pretty simple directory structure for a project. We started with MySQL first, then eventually switched to Postgres for most of our installations. I used Ruby on Rails from version one through version four. That’s still probably my most comfortable programming environment to this day.
[00:06:57] Victoria Melnikova: What made you stick with databases? Why didn’t you go do something else in engineering? What was so interesting about databases?
[00:07:06] Stas Kelvich: When you work on Postgres, there’s just a massive user base already there. I also genuinely like databases as a topic. The whole field has a bigger-than-average institutional memory. Most people who work on databases would know papers from the 60s, 70s, and 80s, because that work is still useful and relevant. In other domains, people don’t need that background because things change too fast. Databases are so heavily optimized for fundamentals — the architecture is mostly driven by hardware and general concepts like what is the latency of my disks, is it rotating media, what are the cache sizes, durability requirements. When you apply those constraints, the books from the 70s and 80s — the IBM systems work — still hold up. At a block diagram level, it looks the same today for most major databases. You’re just optimizing on top of that.
That’s why I love databases as a field — there’s a lot to learn. And when you contribute to Postgres itself, you do something — some small feature — and now people across the world are using it. People write emails from projects you didn’t even know existed. I worked on a nearest neighbor search feature for Postgres around the Obama election cycle, and people started reaching out saying, “We’re doing a voter identification project where you answer questions and we try to pick the right candidate for you — we saw your patch, it wasn’t committed, can you rebase it to the new version?” You see that what you’re doing is useful, versus working on projects that have some uncertain probability of success where you’re always discounting by that probability.
[00:09:47] Victoria Melnikova: I’ve had a few Postgres database founders on my podcast and I noticed the community is really responsive. There’s always a lot of debate going on, and I’m sure as a co-founder of Neon you see that side too. It can go dark, but it’s a very lively community where people are vocal — especially when comparing Postgres to something else. People have a lot of opinions, which is exciting, but you also have to balance your product well and serve the majority. So when Neon started as an idea, was the offering ready? Did you do a lot of engineering work to make Neon production-ready, or did you do a lot of preparation to make it stand out?
[00:10:50] Stas Kelvich: Yeah, it was about a year of work before we opened it up as a preview to the general public — it was even under invite code initially. It took about a year to do the basics because from the start, we wanted to make some pretty big changes to the storage subsystem. We wanted to separate storage from Postgres in a way that still kept full Postgres compatibility. It took time before we could start passing all the tests and at least be correct and not lose your data. Once those basics were done, we could start onboarding users, and the growth since then has been pretty nice.
[00:11:51] Victoria Melnikova: Can you walk us through the early days of Neon? How did it start? Were you friends with the other co-founders and came together to build something, or did someone have an idea and recruit you from different corners of the world?
[00:12:09] Stas Kelvich: It was Nikita’s idea to start a company that would do separation of storage and compute for Postgres. Nikita’s background — he worked on a different database, SingleStore, before, which was more on the analytical side of the spectrum. Postgres is more transactional, high-frequency transactions. He had this idea, and he was also working at an investment fund at the time, so he started sourcing people. After a few conversations, people recommended me to him — I didn’t know him personally before. I’d heard of him, and similarly he reached out to Heikki. Heikki and I knew each other from conferences and had some chats. It started feeling real after that. Initially, Nikita didn’t want to commit his full time to Neon, so it was more of an incubation, but then he got more and more involved and eventually switched to being full-time CEO.
[00:13:24] Victoria Melnikova: That’s very interesting. So you were brought in as the engineering lead for the group?
[00:13:31] Stas Kelvich: Me and Heikki, yeah. But for a long time in that kind of project, you don’t really need anyone non-technical.
[00:13:42] Victoria Melnikova: Can you reflect back on some of the early decisions you made as the Neon co-founder team that really made a difference in how Neon reached its first customers? Were there any moments when you had to sit in a room together and make a bet, go all in on something? The separation of Postgres and storage is one that Nikita brought early on. Were there any others — like deciding on a free tier, or going open source?
[00:14:26] Stas Kelvich: The core idea was: let’s build cloud-native Postgres, and that’s extremely broad. Separating storage and compute is one option — you could do it or avoid it. There were discussions about whether we should build the cloud ourselves or concentrate on the database parts where we had more expertise. That was one of the key bets: let’s actually build the cloud ourselves. You could build software that other vendors host, but we decided that’s just a bad business model. You need to own your own cloud end to end.
Then separating storage and compute was also a costly bet, because now you need a big engineering team. You’re doing hard stuff. If you just take Postgres as-is and focus on the cloud layer, most of your engineers can work on cloud infrastructure. But once you’re heavily modifying the engine, you have a cloud team and a core engine team — those engineers are more expensive, and there are more of them. Our whole cloud team was maybe three people at some point, while the engine side was closer to 10 or 15. There are competitors in the market, so it’s a real bet in terms of burn rate and technology investment. But in exchange you get a much stronger differentiator and something more appealing on the enterprise consumption side.
[00:16:38] Victoria Melnikova: Were you targeting enterprise customers from the start? I know there’s both a bottoms-up approach and top-down enterprise sales. From talking to different customers, I’ve learned a couple of things. First, databases are a very sticky product — companies rarely change databases. So if you’re targeting small and medium businesses, the bet is that they’ll grow and bring a lot of revenue. But enterprise requires a very specific set of capabilities — reliability, stability, and much more. Was the bet always on enterprise from day one?
[00:17:35] Stas Kelvich: Even in our seed round slides, it was marked that we would do it in two steps. First step: bottoms-up, product-led growth, not focusing on enterprise for the first few years. Then go to enterprise — that’s where you get most of the revenue. That plan isn’t super obvious, and a lot of people and investors you talk to ask, “Why not go straight to enterprise?” or “What databases have actually been successful with product-led growth?” MongoDB comes to mind, but that’s more or less it if you’re looking at 2020. MongoDB being successful in PLG was one of the things we studied a lot. And it felt like there was a gap in the market — you could go to AWS and provision Postgres, but that’s too much effort if you just want to start building right now.
[00:18:54] Victoria Melnikova: Interesting. Last year Supabase announced multi-tenant sharding for Postgres. How did you approach that problem? And if you’re focused on enterprise clients, is sharding something only the very biggest clients need?
[00:19:24] Stas Kelvich: We solved that problem by not solving it. We didn’t address sharding at all. We concentrated on making one Postgres database work really well. Sharding is a lot of fun to work on, but the majority of the market — if you slice by revenue — is single databases that are fully compatible with existing applications.
There is some tension at the bigger usage tier. If you have a large SaaS company, you end up setting up a lot of databases. The approach so far is mostly client-side sharding — like YouTube with their bunch of MySQL databases. That’s one way to go. There’s also Spanner, which solved it properly, except that nobody outside of Google uses it. So it’s a business question again. You’d have to invest a lot of money and time, and you’re solving a harder problem because you want to retain Postgres compatibility while working with a 30-year-old codebase with its own design decisions — and the market for that product is smaller.
The previous generation of distributed Postgres-compatible databases — CockroachDB, YugabyteDB — pivoted eventually. Similar MySQL-compatible projects too. It’s tough business. They just couldn’t build enough momentum to become a general-purpose sharded replacement, so they pivoted to specific use cases like data residency. It’s a riskier bet. Now across the industry there’s a lot of Postgres investment being made by all the big cloud players. Let’s see how it goes. My bet on the end state is something Spanner-like — that’s what it would look like if it happens.
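The client-side sharding Stas mentions, YouTube-style fleets of single-node databases, comes down to the application picking a shard for each key so the database layer never has to. A minimal sketch in Python (the shard names and key format here are invented for illustration, not anything Neon or YouTube actually runs):

```python
# Toy client-side sharding router: the application, not the database,
# decides which shard holds a given key. Shard names are hypothetical.
import hashlib

SHARDS = ["db0", "db1", "db2", "db3"]

def shard_for(key: str) -> str:
    """Stable key -> shard mapping via a hash, so every client agrees."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# All lookups for a given user land on the same shard:
print(shard_for("user:42") == shard_for("user:42"))   # True
```

The tradeoff is exactly the one the transcript gets at: this is cheap and keeps each shard a plain, fully compatible database, but resharding and cross-shard queries become application problems rather than database ones.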
[00:23:03] Victoria Melnikova: We’re in 2026, and I’d guess that by now the majority of Neon instances are spun up by agents or some sort of automated code. Is that something you accounted for in the design? Neon started a few years ago, probably ahead of the AI wave at its current scale. Are there any product adjustments you’ve needed to make to accommodate agentic coding workflows?
[00:23:41] Stas Kelvich: Our product philosophy from the start was: make it as straightforward to use as possible for users who aren’t well versed in databases. If you don’t know how to provision a database and don’t know how to answer all the questions AWS would ask you — it should just be simple. For a long time it was only two clicks: sign in with Gmail or GitHub, then press a button to create a database. We also focused a lot on CI/CD — database branching with GitHub integration so it’s easier to test your changes. Those were our big bets for driving growth.
We didn’t plan for agents, but both of those things turned out to help agents a lot. If it’s easier to set up for a junior developer, it’s easier for agents too. We didn’t adapt the product much specifically for agents, but at one point Replit launched a feature with database state snapshotting. Their Replit Agent drove a lot of revenue and adoption and broke them out from their previous market — people learning to code — into paying customers building real things.
They were probably our first big platform to adopt agentic workflows, and agents create a lot of checkpoints. We had to do a lot of work on our infrastructure because on average each database would have around 500 branches, and would quite often hit the branch limit — 1,000 branches on some tiers, 5,000 on others. That required a lot of work at the metadata and orchestration level, quite different from what human usage looks like.
[00:26:42] Victoria Melnikova: That’s a very interesting topic, and we discuss it a lot internally at Evil Martians. There’s a new emerging service category called Agent Experience — essentially how easy it is for agents to use your tool. When we talk about agent experience, it’s a double-edged sword. On one side you want to create an amazing agent experience and make your tool very easy to use. On the other side, it opens up room for fraud, because agents can also be fraudulent. You’re automating coding workflows, which opens up a lot of surface area for abuse. We see a lot of database incidents being reported, partly because agents are creating a lot of code that can be both fraudulent and that puts a lot of pressure on systems. Is this something you talk about a lot at Neon, and what are you cautious about?
[00:28:11] Stas Kelvich: There are two things here — how you optimize for agents, and what you do with all that usage. In terms of optimizing for agents, we didn’t do that much specifically. There’s some marketing around it, but in terms of engineering, if you think of a junior developer, that’s actually a really good approximation of what an agent does. It makes sense — they’re trained on real data and it’s a different kind of intelligence, not exactly the same as a human, but a human is a pretty good approximation. But instead of having, say, a few tens of millions of active developers in the world, now imagine each of those developers spinning up a bunch of agents. That’s an enormous amount of consumption that nobody fully accounted for. It’s a big thing across the whole stack — people are building new data centers, new control planes for their services. It’s a lot of consumption, but it’s a good problem to have. Way better than not having enough consumption.
[00:29:56] Victoria Melnikova: Yeah, for sure. Do you see a lot of fraud happening?
[00:30:03] Stas Kelvich: Not at our layer of the stack. We don’t get many abuse requests because a database isn’t something that’s publicly facing. We’ve seen some projects hosted at Neon get into trouble and taken down, but that’s usually handled at the level above us — we’re not distributing the content. Whoever is on top, some other hosting layer, usually deals with that situation.
Also, with branching, you typically inject some amount of data obfuscation by design. The target use case is: you have a production database, you create a branch, and different organizations have different rules around what developers can see. In a lot of places, as a developer you’re not supposed to see production data, so before a branch is created, sensitive data is wiped. There are a lot of levels to this — is masking enough, or do you need to fully wipe it? Some organizations are so security-conscious they don’t allow any data transfer or lineage between environments at all — we just create an empty database, apply the schema, and there’s no data in it. All of that is a tradeoff between how well you can test things and how secure things are. And it applies to agents just as much as it does to humans.
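The branch-time masking Stas describes can be pictured with a toy sketch (plain Python; the table, columns, and policy rules below are made up for illustration and are not Neon's actual branching pipeline): before a branch is handed to a developer, each sensitive column is kept, masked, or wiped according to the organization's policy.

```python
# Toy model of masking production data before it is exposed in a branch.
# Table, columns, and policy are hypothetical illustrations.

def mask_row(row, policy):
    """Return a copy of `row` with each column transformed per `policy`.

    policy maps column name -> "keep" | "mask" | "wipe".
    """
    out = {}
    for col, value in row.items():
        rule = policy.get(col, "keep")
        if rule == "keep":
            out[col] = value
        elif rule == "mask":           # preserve shape, hide content
            out[col] = "*" * len(str(value))
        else:                          # "wipe": drop the value entirely
            out[col] = None
    return out

def branch_table(rows, policy):
    """Simulate creating a branch: copy rows through the masking policy."""
    return [mask_row(r, policy) for r in rows]

production = [
    {"id": 1, "email": "alice@example.com", "plan": "pro"},
    {"id": 2, "email": "bob@example.com", "plan": "free"},
]
policy = {"email": "mask"}   # developers may see plans, but not emails

branch = branch_table(production, policy)
print(branch[0]["email"])    # shape preserved, content hidden
```

The "empty database plus schema" option from the transcript is the degenerate case where every column is wiped and only the structure survives.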
[00:32:00] Victoria Melnikova: Just by looking at a database — I don’t know if you have a dashboard where you see an average database hosted on Neon — can you tell whether it’s agent-powered or not? By the number of rows, or whatever it might be?
[00:32:25] Stas Kelvich: You obviously wouldn’t look inside the database itself. But if you look at the Neon customer or user level, it’s pretty easy to tell if it’s a generic platform or not. If you’re creating 10 databases per second, there just aren’t enough humans in the world to do that at that rate — not even across all the clouds combined. A very high creation and churn ratio is almost certainly agents.
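The signal Stas describes, sustained creation and churn far above any plausible human rate, can be caricatured as a tiny heuristic (the threshold, event format, and cutoff here are invented for illustration, not Neon's real detection logic):

```python
# Toy heuristic: database creation/churn well above human rates
# suggests automated (agent) usage. Numbers are hypothetical.

HUMAN_MAX_CREATES_PER_HOUR = 20   # assumed ceiling for manual clicking

def looks_like_agents(events, window_hours=1.0):
    """events: list of (timestamp_hours, kind), kind in {"create", "delete"}.

    Flags the account when creations per hour exceed a human-scale
    ceiling AND a large share of created databases are quickly deleted.
    """
    creates = sum(1 for _, kind in events if kind == "create")
    deletes = sum(1 for _, kind in events if kind == "delete")
    create_rate = creates / window_hours
    churn_ratio = deletes / creates if creates else 0.0
    return create_rate > HUMAN_MAX_CREATES_PER_HOUR and churn_ratio > 0.5

# A platform spinning databases up and down every few seconds:
agent_events = [(i / 3600, "create") for i in range(0, 3600, 4)]
agent_events += [(i / 3600, "delete") for i in range(2, 3600, 4)]
print(looks_like_agents(agent_events))   # True
```

A human team clicking through a console sits far below both thresholds, which is the point of the "10 databases per second" remark: at that rate, no per-database signal is even needed.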
[00:33:06] Victoria Melnikova: I want to talk about the acquisition. It was a big splash in the community when Neon was acquired, given the size of the deal — acquired by Databricks for $1 billion. What does it feel like to go through an acquisition as a co-founder? And let me take a step back — before it happened, was it something in the plans? You had raised close to $130 million. Was there ever a trajectory to just keep going independently?
[00:33:51] Stas Kelvich: You always plan company strategy as: we stay independent for a long time, we’re building a platform for 10-plus years. That was part of the hiring pitch — when someone asked when they’d be able to get liquidity on their shares, the answer was: we’re building something long term, starting from year X we’ll do some internal liquidity events, but the target is a long-term project. At the same time, we were open to what was happening in the market. And once you reach a certain valuation, there aren’t that many buyers who can acquire you anyway.
We crossed that bridge when AI and agents started affecting the plan. There was also the desire to move upmarket into enterprise. You can do that yourself, but it takes a lot of time — you need salespeople, there are iterations, and you’re now operating in a human domain, not an engineering domain. We did make some attempts. The first was an integration with Microsoft, where we became a native offering on Azure, and that was fun. But then you need to do sales enablement, navigate the corporate structure of a huge enterprise, things are moving but not fast, and you’re never going to be a top priority for Microsoft when they have so many other things going on.
So we started looking around for who we could partner with to get into enterprise — there were companies for whom having a Postgres offering and competing with hyperscalers would make a lot of sense. We’d been pitching the integration idea to Databricks for a while, but with agents creating a lot of databases and Neon benefiting from that wave, the conversation about an acquisition started getting more real. For Databricks, they have a big footprint in data lakes — Lakehouse is the current term for that — and in AI usage and model serving. So the combination started making way more sense. We quickly decided: let’s go.
If you stay as a separate player, it takes a long time. Runway, momentum, everything. It’s fun to be a pirate, but if you think about what gives your product the best chance of success, Databricks puts a lot of focus on Lakehouse — their brand for the ongoing enterprise offering. That was a pretty compelling fit. There are always risks with acquisitions — you end up in a big enterprise and your product dies. But that’s not how it looks one year in. It’s actually quite the opposite: we’re getting a lot of investment, growing the team significantly, and the Databricks hiring brand is strong, so it’s easier to attract top talent with budget to match.
The deal closed in about 30 days. For a billion-plus deal, that was quite intense. At some point there were about 90 lawyers working on it.
[00:38:59] Victoria Melnikova: Jesus.
[00:39:00] Stas Kelvich: There’s a lot of regulatory work — antitrust filings and so on. Usually a deal like this takes six months to a year. But everyone was committed to moving fast, so money was thrown at the problem and the number of people working on it was pretty crazy.
What was also interesting — and kind of funny — on the receiving end for Neon employees: we basically said, “Hey folks, we don’t have time to do custom contracts with proper numbers for how many shares you’re getting.” So everyone signed papers with blanks. Because if there’s a typo, you can basically contest the whole deal. A lot of people were worried, but everyone went with it.
[00:39:54] Victoria Melnikova: Thanks to you, I had David Gomes on the podcast. He’s now working at Cursor, but at the time he was working at Neon alongside you. He mentioned something that really stuck with me — that Nikita managed to negotiate the deal in a way that Neon still stands as a brand, which is not common for acquisitions. What does it take for a team to stay independent and maintain a strong product presence after being acquired?
[00:40:28] Stas Kelvich: I think Nikita was able to negotiate that because it actually made sense — not because he’s a great negotiator, but because it genuinely fit. Databricks independently had already been moving toward self-serve consumption. Databricks is a very top-down, sales-driven organization — for a long time you couldn’t even create an account without going through sales, and there was no free tier. But now there is, and that happened even before the Neon acquisition. I think that also contributed to the decision to acquire a company with strong product-led growth.
So there’s a free tier of Databricks and a free tier of Neon — serving slightly different needs, enterprise versus self-serve developers and small development teams. And you also get consumption from big accounts. Quite a lot of S&P 500 companies have people inside with Neon projects created from their work email. There are a lot of benefits to that.
The idea from the start was to unify infrastructure and have two front doors. And there’s also the data replication angle — most logical replication streams out of Postgres end up in some analytical database. People have to configure that manually today. The goal is: can we make it as simple as possible to land that data in Databricks? The vision is: I’m a developer, I built my app, I have my transactional database, and when I hire my first analyst and want to start building dashboards, it would be great to have that bundled in with a strong product, without needing to be opinionated about the destination. Most developers aren’t.
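The manual plumbing Stas alludes to, streaming changes out of a transactional database into an analytical one, can be caricatured in a few lines (pure Python; the event shape, sink, and batch size are invented, and a real pipeline would decode Postgres's logical replication stream rather than iterate over dicts):

```python
# Toy change-data-capture loop: batch row changes from a transactional
# source and land them in an analytical sink. Event shape and sink are
# hypothetical stand-ins for a real logical replication consumer.

def replicate(changes, sink, batch_size=100):
    """Group change events into batches and hand each batch to the sink.

    changes: iterable of dicts like {"op": "insert", "table": ..., "row": ...}
    sink: callable receiving a list of events (the "analytical database")
    """
    batch = []
    for event in changes:
        batch.append(event)
        if len(batch) >= batch_size:
            sink(batch)
            batch = []
    if batch:                  # flush the final partial batch
        sink(batch)

landed = []
replicate(
    ({"op": "insert", "table": "orders", "row": {"id": i}} for i in range(250)),
    sink=landed.append,
    batch_size=100,
)
print(len(landed), [len(b) for b in landed])   # 3 [100, 100, 50]
```

Bundling this step into the platform, so the developer never writes the loop or configures the destination, is the integration the transcript describes.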
[00:42:55] Victoria Melnikova: How did life change for you after the acquisition? Neon right before the acquisition — it wasn’t huge, right? It hadn’t crossed a hundred people.
[00:43:06] Stas Kelvich: It was around 130.
[00:43:08] Victoria Melnikova: Okay, 130-ish. A comfortable size, and you were one of the co-founders very much in charge. Now after being acquired, how does it feel to be in a bigger team, potentially solving different problems? How has your role personally changed?
[00:43:24] Stas Kelvich: The org structure is pretty similar. People are still reporting to Nikita, and I don’t think it’s changed that much, to be honest. One of the biggest changes is that we’ve gotten some strong people from Databricks working on Neon and Lakehouse, which is a genuinely positive change. Being part of a bigger company does mean some things are harder to roll out, but in exchange you get things like stronger compliance frameworks — you don’t have to build all of that yourself. SOC 2 and so on was already on the roadmap, so having that infrastructure available makes sense.
We haven’t had significant attrition from the acquisition — most of the people who left had external reasons, like family planning or taking a gap year that was already in the works. One person actually retired — we had a great ex-IBM engineer. Most ex-IBM people are around retirement age, and they also tend to know a lot about databases. So it’s largely the same team, just stronger with some folks from Databricks, and we’re pretty excited. With this new momentum, challenging the hyperscalers feels more realistic. Even at Neon standalone you understood it would take 10-plus years. Now the timeline feels shorter. It might happen, might not, but it’s more realistic.
[00:45:38] Victoria Melnikova: Does that mean you’re relieved of some of the mundane tasks you didn’t enjoy as a co-founder? Can you now focus more on technical challenges, or do you still have those operational and leadership things to worry about?
[00:46:02] Stas Kelvich: You still have to worry about all of it if you care about the product. The fact that someone is responsible for something doesn’t mean it’s not your problem. Sure, if you’re playing the org chart game, you can technically pass the problem — someone else is responsible. But at the end of the day it’s your product’s problem. Access to expertise helps. Better hiring helps. Overall, I was working on whatever was on fire in Neon, and that’s still the case in Databricks. I’m not sure how applicable this is to other acquisitions, because here Databricks is putting a lot of emphasis on Lakehouse — it’s unusual for an acquired startup to be one of the top priorities at the acquiring company. But when it happens, it’s pretty good.
[00:47:08] Victoria Melnikova: What’s the long-term strategy for you personally? Do you feel good at Databricks, are you in it for the long haul, or are you in closing-out mode?
[00:47:18] Stas Kelvich: I’m not in closing-out mode. I’ve heard horror stories from people who felt that way after being acquired —
[00:47:32] Victoria Melnikova: — and just wanted to quit right after.
[00:47:33] Stas Kelvich: That didn’t happen to me. It’s working through the same roadmap.
[00:47:41] Victoria Melnikova: Let’s quickly talk about the current state of things. We see this AI wave stabilizing — players have taken chunks of the market and are now working on making things more reliable for enterprise. I’m talking about agent workflows but also all the infrastructure serving AI-assisted coding, and really AI-assisted everything. How do you feel the market has changed since early 2025? Where are we today, and what should early-stage technical founders be thinking about in 2026?
[00:48:42] Stas Kelvich: One trend that’s easier to see: infra consumption is growing. Some business models are going to work, some won’t. People are worried about subscription or per-seat models when now, instead of a hundred people with a hundred seats, you might have one agent occupying one seat. There will be iterations on that. But if you’re consumption-based, you’re on a nicer end of the spectrum. A lot is needed in infra. Agentic observability and security are big areas. I’m probably stating obvious things—
[00:49:40] Victoria Melnikova: Obvious for us in the bubble.
[00:49:41] Stas Kelvich: Right, it’s not obvious to everyone. I was living in Cyprus until 2025, and when you’re outside the bubble it’s very visible as a bubble. Now I’m in the Bay Area, so I’m susceptible to the same brainwaves.
[00:50:03] Victoria Melnikova: Yeah, it’s really hard. Even with OpenClaw it was dominating all my feeds everywhere — even Nikita’s dinner featured Peter, so it felt like everyone must know about this. But the bubble is real. A lot is happening here in the Valley and at some point it’ll reach the wider masses, but not all of it will. A lot will stay here. It’s interesting how this will translate to startups and companies across the globe when it settles. What are the things that will stay? MCP doesn’t sound as hot as it did a couple of months ago. Now we’re talking more about agentic workflows, skills, and other things. What will survive and become standard tooling for an average engineer in a couple of years? Do you use AI in your coding?
[00:51:09] Stas Kelvich: I use it a lot. I think there was a really big phase transition somewhere around November or December — and you don’t see that kind of thing that often. Over the past year, it went from mostly not working on my tasks to mostly working on my tasks. You can build pretty big things and run experiments without touching an editor. That’s pretty exciting. Now I think it’s almost a ChatGPT moment for coding — at least for me. ChatGPT was exciting, but it kind of plateaued — small improvements, it could generate some code, but you had to stay in charge. Now that’s changing. I think there will just be more software written overall. I’ve yet to see a development team that said, “Okay, we’ve run out of tasks — now what?” You’re usually constrained by headcount and budget.
More consumption across all areas. I don’t have specific predictions, but the market is changing a lot, which means a lot of opportunities for people who stay in the loop and move quickly. You can grow revenue from zero to a hundred million ARR by doing the right thing at the right time. You don’t have to be extremely smart for that. There are a lot of echo chambers out there, but it’s a gold rush — you can do the right thing at the right time and nobody is going to look at your diploma. It might not even be technically hard. I think there are a lot of opportunities like that ahead. If you’re looking for what to optimize for — just stay in the loop, and if something feels right, experiment. This is a good time to try things.
[00:53:57] Victoria Melnikova: It’s actually kind of hard to stay in the loop because there’s so much going on. But good luck out there — I hope you find something that blows up. And this brings us to my final question, which is always the same. It’s called the Warm S question, and it goes like this: what makes you feel great about what you’re doing today?
[00:54:16] Stas Kelvich: I’m doing something that is useful for other people, and I can feel that feedback loop. For a lot of users — whether enterprise users or individual developers — I’m delivering real value, and that’s pretty rewarding. Let’s see what comes next, but in this particular area, there’s a lot of work to do.
[00:54:43] Victoria Melnikova: That comes full circle to where we started this conversation, which is nice.
[00:54:49] Stas Kelvich: I’m actually quite hopeful that LLMs can help with physics as well. In theoretical physics, you eventually run out of experiments that give you new data. As humanity, we’ve built the Large Hadron Collider and it didn’t bring anything unexpected. At a fundamental level, we can describe everything in the world with high precision; all the experimental data we do have, we can describe, and there are only a few gaps where we can’t do experiments. But there are internal problems with the theories — they’re essentially a collection of steps and algorithms for how to predict something, and they work, but there are known issues.
Progress depends entirely on how many extremely smart people you have, each of whom might spend years of calculation just testing one good hypothesis. Breakthroughs happen roughly once a decade. It’s a fundamentally glacial process, and reaching the level of understanding needed to contribute takes decades itself — you have to start early and move faster than the average person. Take someone like Steven Weinberg: you can roughly approximate the field’s progress by the number of Weinbergs in the world — maybe a few hundred, optimistically.
[00:57:15] Victoria Melnikova: So with AI you can actually—
[00:57:16] Stas Kelvich: With AI, I think we could be less constrained by the number of Weinbergs. If it reaches the same level of sophistication — it’s getting close. Math is a field where AI progress is very good. Physics is trickier because it’s less formally verifiable, but it’s still pretty close. You can imagine progress getting faster, and as a user you don’t have to be quite as smart. You just have to be smart enough to validate the output.
[00:57:50] Victoria Melnikova: Interesting. So we’re hopeful for physics — that’s the summary. Finally, I want to give you a space to invite our listeners to try Neon. How would they get started?
[00:58:06] Stas Kelvich: Sure. Go to neon.com, create your database, play around, create a few branches.
[00:58:13] Victoria Melnikova: Perfect. Thank you so much, Stas. And thank you for catching yet another episode of Dev Propulsion Labs. We at Evil Martians transform growth-stage startups into unicorns, build developer tools, and create open source products. If you’re a developer tools company and need help with product design, development, or SRE, visit evilmartians.com/devtools. See you in the next one.