The AI industry’s biggest week: Google’s rise, RL mania, and a party boat

I asked attendees for their takeaways from this year’s NeurIPS in San Diego.

Alex Heath is a contributing writer and author of the Sources newsletter.

This is an excerpt of Sources by Alex Heath, a newsletter about AI and the tech industry, syndicated just for The Verge subscribers once a week.

Reinforcement learning (RL) is the next frontier, Google is surging, and the party scene has gotten completely out of hand. Those were the through lines from this year’s NeurIPS in San Diego.

NeurIPS, or the “Conference on Neural Information Processing Systems,” started in 1987 as a purely academic affair. It has since ballooned alongside the hype around AI into a massive industry event where labs come to recruit and investors come to find the next wave of AI startups.

I was regretfully unable to attend NeurIPS this year, but I still wanted to know what people were talking about on the ground in San Diego over the past week. So I asked engineers, researchers, and founders for their takeaways. The responses below include Andy Konwinski, cofounder of Databricks and founder of the Laude Institute; Thomas Wolf, cofounder of Hugging Face; OpenAI’s Roon; and attendees from Meta, Waymo, Google DeepMind, Amazon, and a handful of other places.

I asked everyone the same three questions: What’s the buzziest topic from the conference? Which labs feel like they’re surging or struggling? Who had the best party?

The consensus was clear. “RL RL RL RL is taking over the world,” Anastasios Angelopoulos, CEO of LMArena, told me. The industry is coalescing around the idea that tuning models for specific use cases, rather than scaling the data used for pre-training, will drive the next wave of AI progress. And on the lab-momentum question, Google is clearly having a moment. “Google DeepMind is feeling good,” Hugging Face’s Wolf told me.

The party circuit was naturally relentless. Konwinski’s Laude Lounge emerged as one of the week’s hotspots — Jeff Dean, Yoshua Bengio, Ion Stoica, and about a dozen other top researchers came through. Model Ship, an invite-only cruise with 200 researchers, featured “a commitment to the dance floor that is unprecedented at a conference event,” one of the organizers of the cruise, Nathan Lambert, told me. Roon was dry about the whole scene: “you can learn more from twitter than from literally being there … mostly my on-the-ground feeling was ‘this is too much.’”

Here’s what attendees had to say about NeurIPS this year:

What was the buzziest topic among attendees that you think more people will be talking about in 2026?

  • Andy Konwinski, founder of the Laude Institute: “I did a lot of interviews over the week, and when I asked people what felt overhyped to them, I heard agentic AI, RL, and world models, though I also heard RL and world models as areas people think are up-and-coming and most interesting to watch.”
  • Thomas Wolf, cofounder of Hugging Face: “AI x science, interpretability, RL long rollouts”
  • Roon, member of technical staff, OpenAI: “you can learn more from twitter than from literally being there / the tweets are saying the buzz is about continual learning / That’s possibly true / I can’t guarantee / mostly my on-the-ground feeling was ‘this is too much’”
  • Maya Bechler-Speicher, research scientist at Meta: “I can’t say with certainty what the buzziest topic was — the conference is massive, and my exposure was naturally limited — but tabular foundation models were undoubtedly gaining significant traction, and I expect this momentum to continue into 2026. After years in which decision-tree–based methods dominated generalization on tabular data, we are finally seeing foundation-model approaches that consistently outperform them. Another area drawing considerable attention is physical AI, which remains full of open research questions and opportunities.”
  • Anonymous researcher at a big AI lab: “I’m biased here, but AI for the physical world (robotics, engineering, etc, not just AI for science) looks like it’s finally taking off.”
  • Nathan Lambert, senior researcher at the Allen Institute for AI: “It was accepted that [Ilya Sutskever]‘s proclamation on the Dwarkesh Podcast that it’s now ‘The Age of Research’ rather than the age of scaling is a good moniker. No one area of the poster sessions or workshops was obviously labeled as the most important topic (e.g., last year’s NeurIPS was obsessed with reinforcement learning and reasoning after the launch of o1). Some groups reflected solemnly on how this was the first NeurIPS since DeepSeek R1 and a year of open model transformation, but most of the conference didn’t feel like it had an active role to play in it.”
  • Brian Wilt, head of data at Waymo: “The buzziest topic among my friends was how much research was happening in frontier labs vs. academia and was likely unpublished. From my perspective at Waymo, many of the (applied) problems I need to solve only emerge at scale (e.g., data, performance). However, there’s also a deep sense that we need another fundamental breakthrough besides scaling current architectures (as Ilya/[Andrej] Karpathy/others have alluded to)”
  • Evgenii Nikishin, member of technical staff at OpenAI: “Continual learning was certainly among the buzziest topics. I don’t know yet how many scientific advances there will be in 2026 — maybe some, maybe little — but I think more people will be talking about it.”
  • Paige Bailey, developer lead for Google DeepMind: “Definitely sovereign open models, especially deploying them on-prem with fine-tuning + RL. In terms of what people will be talking about in 2026, I think world models and robotics are the big ones.”
  • Sachin Dharashivkar, CEO of AthenaAgent: “Designing RL environments and training agents was the most discussed topic.”
  • Ronak Malde, ex-DeepMind engineer and new founder of a stealth RL startup: “Continual learning. To support this next frontier, we’re going to need new architectures, new reward functions, new data sources, and new data scalability models.”
  • Deniz Birlikci, researcher at Amazon: “Agents are not a model — they are a stack. Therefore, RL for agents should train with the same tools/stacks that will be used in production. More teams are thinking [about] how to create a dense taxonomy and labeling for their data, especially in RL, and I find this very important.”
  • Richard Suwandi, student ambassador for The Chinese University of Hong Kong: “There were lots of discussions around whether we can build AI systems that are truly creative (not just optimizing within known boundaries, but capable of generating genuinely novel ideas and discoveries on their own). I expect this to become a major research frontier in 2026.”
  • Anastasios Angelopoulos, CEO of LMArena: “RL RL RL RL is taking over the world”

Which labs feel like they’re surging in momentum, and which ones feel more shaky?

  • Nathan Lambert (Allen Institute for AI): “The discussion of which labs are leading and falling behind felt fully like an export out of SF gossip in the last few weeks. Gemini and Anthropic are ascendant at the cost of OpenAI. At least OpenAI was mentioned, where I don’t think I heard anyone debating the capabilities of xAI once.”
  • Evgenii Nikishin (OpenAI): “The Big 3 frontier Labs (GDM, Anthro, OAI) are having a good overall momentum, though each has their unique stronger and weaker sides. As for places that are not doing too great, think about quite a few LLM / imagen startups from 2022-2024 who were offering similar pitches and didn’t have unique value prop. I feel that many of them either already or are in the process of quietly dying.”
  • Andy Konwinski (Laude Institute): “Surging momentum: Alibaba/Qwen, Moonshot/Kimi, Arcee, Reflection AI, Human&, Prime Intellect all made announcements that very recently that were buzzing / Google w/ gemini 3, nano banana, TPUv7”
  • Anonymous researcher: “Reflection had a massive booth given that they’re a very young startup – that’s definitely new.”
  • Brian Wilt (Waymo): “I was proud that Alphabet/Google had the most accepted papers this year.”
  • Paige Bailey (Google DeepMind): “Periodic Labs and Reflection AI feel like they are surging; they both have really interesting mission statements. I also loved seeing Anna and Azalea launch a company (Ricursive Intelligence).”
  • Ronak Malde (stealth RL startup): “Several neolabs are going to launch in 2026 that shake up research as we know it. DeepMind is still crushing it. Kimi Moonshot and Deepseek are too.”
  • Richard Suwandi (The Chinese University of Hong Kong): “One lab that clearly feels like it’s surging is Google DeepMind. At NeurIPS, you could really feel them pushing a new research agenda, with things like Nested Learning and Titans/MIRAS pointing toward more continual, long‑term memory rather than just bigger transformers, which was a refreshing shift in the hallway conversations.”
  • Thomas Wolf (Hugging Face): “Google DeepMind is feeling good.”

What was the best party you attended or had FOMO over?

  • Nathan Lambert (Allen Institute for AI/Model Ship co-organizer): “The paradigmatic example of a NeurIPS party for the current area of AI was Model Ship, an invite-only cruise with 200 top researchers, investors, and personalities in the AI space. It had bespoke merch, free conversation, and a commitment to the dance floor that is unprecedented at a conference event.”
  • Andy Konwinski (Laude Institute): “I was a bit bummed that I couldn’t make it out to events organized by Robert Nishihara, Naveen Rao, and Nathan Lambert. I also was sad to miss Rich Sutton and Yejin Choi’s keynotes (though I ended up interviewing Yejin so we got to jam on the topics she spoke about).”
  • Roon (OpenAI): “openai ones, a16z ones / I liked the a16z one because I got to meet lex [Fridman] that was cool / but even the parties I mostly tried to avoid kept getting partifuls that were like 750 people in a house or whatever / what a nightmare”
  • Maya Bechler-Speicher (Meta): “The Meta party was one of the most impressive company events I’ve attended. Additionally, G-Research invited a very small group of researchers to a three-star Michelin restaurant, which was not a party per se but was absolutely exceptional.”
  • Brian Wilt (Waymo): “My favorite event was a small gathering at comma.ai (HQ’d in San Diego), who develop an open-source driver assistant. I use it on my personal car, it’s perfect for when I’m not riding in Waymo in Phoenix. @yassineyousfi_ put together an online capture-the-flag to get in. @realGeorgeHotz took us on a tour of their data center and manufacturing. I did die a little when I typed their wifi password, ‘lidarisdoomed’”
  • Evgenii Nikishin (OpenAI): “The OpenAI party 😎”
  • Paige Bailey (Google DeepMind): “I actually had to head back late Friday/early Saturday, so I missed out on the end-of-conference workshops. I had major FOMO over the ML for Systems workshop, though, as well as the ‘Claude and Gemini Play Pokemon’ workshop — they both looked awesome!”
  • Ronak Malde (stealth RL startup): “Radical VC bringing Jeff Dean and Geoffrey Hinton into one room was the highlight of the week.”
  • Anastasios Angelopoulos (LMArena): “Laude Lounge”
  • Thomas Wolf (Hugging Face): “The Hugging Face party where 2.5k+ people registered / I really enjoyed the Prime-intellect one”
  • Dylan Patel, founder of SemiAnalysis: “Mine haha”

Yes, some people thought keynotes were parties. I guess academia lives on at NeurIPS after all.
