After 15 years, HubSpot’s flagship conference is getting a rebrand. Not because INBOUND didn’t work, but because it did. The event built an entire movement around inbound methodology, and that philosophy shaped how a whole generation of us think about marketing and sales.
But things have changed. AI has rewritten the rules on how GTM teams operate. Buyers find information through AI-powered search and social platforms that didn’t even exist when INBOUND started. And HubSpot has grown far beyond its original roots into a full customer platform handling marketing, sales, service, and operations.
From the release: “UNBOUND is about more than a new name. It’s about growth without the constraints of old playbooks, old channels, and old ways of thinking about what’s possible. It’s built on everything that made INBOUND great — the community of 10,000+ professionals committed to growth, the energy, changemakers, and thought leaders. What’s changing is the scope.”
Team Hypha is on board with the change. It does make sense. Our work has expanded well beyond inbound marketing alone. The platform evolved, our strategies evolved, HubSpot evolved. It’s a symbolic move that opens up what’s possible instead of tying everything to one methodology.
I will say though, it’s going to take me a minute to stop calling it INBOUND!
-Sage Levene, VP of Marketing, Hypha HubSpot Development
Open Mic
Building the Foundation
By Jon Chim, VP of Design & Development, Hypha HubSpot Development
We’re currently building a design system that brings together all of Hypha’s custom modules, templates, and snippets developed over the years into one central hub. The goal is to give our team a shared foundation, a boilerplate that lets them move faster, stay consistent, and start every new project from the same baseline.
It’s the kind of work that’s been easy to put off. But the impact will show up once it goes live, and it should free up our time for the areas where we’re needed most.
Without a system, things drift: spacing becomes a guess, components grow disjointed, fields and naming conventions get messy. Our designers and developers end up producing slightly different versions of the same thing. A system keeps this from happening.
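As a toy illustration of what “spacing becomes a guess” means in practice (the token names and values here are hypothetical, not Hypha’s actual system), a shared token set plus a small check can flag drift before it ships:

```python
# Hypothetical design tokens: one source of truth for spacing values (in px).
SPACING_SCALE = {"xs": 4, "sm": 8, "md": 16, "lg": 24, "xl": 40}

def check_spacing(values):
    """Return any spacing value that isn't on the shared scale."""
    allowed = set(SPACING_SCALE.values())
    return [v for v in values if v not in allowed]

# A module using ad-hoc values gets flagged:
print(check_spacing([8, 16, 18, 24, 25]))  # [18, 25]
```

The same idea extends to naming conventions and field definitions: once they live in one place, a lint step can catch one-off deviations instead of relying on every reviewer to remember the rules.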
As we build this out, the challenge is ensuring the system doesn’t become a constraint, a rulebook that tells the team you can only design one way. My hope is that we build a foundation solid enough that the team feels free to push beyond it: clear enough to move fast, flexible enough to adapt.
That’s the part that requires ongoing judgment. A system is only as good as the people maintaining it—knowing when to add something new, when to retire something outdated, and when a one-off solution is a signal that the system needs to evolve. It’s living documentation, not a locked archive.
A design system shouldn’t be the ceiling. It should be what makes the ceiling easier to break through. That balance is harder to get right than it sounds—and it’s a great design problem to solve together as a team.
A must-read whether you’re an AI newbie or an expert. No bombshells, no smoking gun, but a clear look at Altman’s pattern of exaggeration and manipulation.
“Even people close to Altman find it difficult to know where his ‘hope for humanity’ ends and his ambition begins. His greatest strength has always been his ability to convince disparate groups that what he wants and what they need are one and the same. He made use of a unique historical juncture, when the public was wary of tech-industry hype and most of the researchers capable of building A.G.I. were terrified of bringing it into existence. Altman responded with a move that no other pitchman had perfected: he used apocalyptic rhetoric to explain how A.G.I. could destroy us all—and why, therefore, he should be the one to build it. Maybe this was a premeditated masterstroke. Maybe he was fumbling for an advantage. Either way, it worked.”
An interesting piece that takes aim at the flashy (and often predatory) startups guaranteeing to increase your brand’s mentions in LLMs and the like. Hypha’s consistent take? SEO and AEO work together. The foundations we’ve built up over the years do not need to be thrown out, and as always, there’s no magic wand to guarantee an exact result.
From the first piece:
“A recent SparkToro report found that on desktop, searches on traditional search engines still dwarf searches via AI tools; Amazon, Bing, and YouTube had a larger share of search activity than ChatGPT, according to the analysis. Yet relatively few companies, if any, are prioritizing visibility on these other platforms, Fishkin argues — instead there’s ‘executive mania,’ press and media attention, and a hype cycle around AI search specifically.”
Hypha Highlights
Rebranded at INBOUND 2025, Data Hub—HubSpot’s renamed Operations Hub—carried over its pricing tiers, core automation logic, and most of its feature set intact. What also carried over, and shouldn’t, is how most teams are evaluating the Pro vs. Enterprise decision.
The most common mistake in this evaluation isn’t picking the wrong tier—it’s running the comparison against the wrong feature set. Teams frequently disqualify themselves from Pro because they assume the Data Quality Command Center and custom code actions are Enterprise-only. They’re not. Both are available starting at Professional. Meanwhile, the features that actually separate Enterprise from Pro—native data warehouse integrations, Reverse ETL, Datashare, and significantly higher operational volume limits—often don’t register as relevant until a team has already hit the ceiling.
This piece breaks down where the tiers actually differ, what Enterprise unlocks that Pro can’t approximate, and when the $1,200/month price gap between tiers makes operational sense.
“TikTok announced an expanded partnership with HubSpot, which will give HubSpot customers more tools to manage their TikTok ads within the digital content management platform.
“The update will enable TikTok onboarding, lead management and measurement capabilities within HubSpot’s Marketing Hub software, expanding the opportunities for TikTok to drive more business activity.”
AI in Action
News, updates and tools from the AI industry.
OpenAI and Anthropic’s confidential financial documents ahead of their planned IPOs reveal soaring AI model training costs. Both companies report two versions of earnings—one including and one excluding training costs. Despite massive losses, both companies expect to more than double revenue this year thanks to enterprise customers adopting AI tools, with OpenAI projecting $25 billion in 2026 revenue and Anthropic at $19 billion, though inference costs currently eat into more than half of revenue for each company.
Anthropic launched Project Glasswing with partners including AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks to secure critical software using Claude Mythos Preview, an unreleased frontier model that has found thousands of zero-day vulnerabilities including some in every major operating system and web browser. The company does not plan to make the model generally available due to its powerful offensive capabilities.
Anthropic launched Claude Managed Agents, a new product that provides out-of-the-box infrastructure for businesses to build and deploy AI agents, simplifying what was previously a complex distributed-systems engineering problem. The tool includes an agent harness with software tools, a memory system, a built-in sandboxed environment, the ability to run agents autonomously for hours in the cloud, and monitoring capabilities.
The New York Times commissioned AI startup Oumi to analyze Google’s AI Overviews accuracy using the industry-standard SimpleQA benchmark, finding they were accurate 91% of the time with Gemini 3 (up from 85% with Gemini 2), meaning Google provides tens of millions of erroneous answers every hour across its five trillion annual searches. More concerning, 56% of accurate responses were “ungrounded,” linking to websites that didn’t fully support the information provided, which makes responses difficult to verify. Facebook and Reddit were the second- and fourth-most-cited sources, and Facebook was cited 7% of the time when AI Overviews were inaccurate.
Google released “Google AI Edge Eloquent” on iOS, an offline voice dictation app that transcribes speech in real-time and automatically cleans up the text, with no subscriptions or usage caps. The app features a fully offline mode where conversations don’t leave the device, tools to make text more formal, short, or long, the ability to import custom dictionaries from Gmail, and optional Gemini integration to “enhance text polishing,” though early testing suggests accuracy is poor compared to apps using Whisper or Parakeet models.
Meta released Muse Spark, its first AI model since CEO Mark Zuckerberg’s multibillion-dollar spending spree on talent and infrastructure, which the company said is “purpose-built” for use across its products including Instagram, Facebook, and Threads with specialized capabilities in healthcare and shopping. Unlike Meta’s previous open Llama models, Muse Spark is a smaller, closed model that beat leading models from Google, OpenAI, and Anthropic on select benchmarks according to Meta’s own evaluations, though the company acknowledged the model “was not yet cutting-edge in certain areas.”
“As OpenAI prepares for its IPO, the company decided to…pay millions of dollars for a tech talk show? Is this just what rich tech guys like Sam Altman are like now?”
How can we help you?
Case Study: ABM with HubSpot
Most outbound teams hit the same ceiling: solid sequencing, active reps, but no clear signal on who’s actually in market. That’s where we started.
We worked with a developer-focused SaaS company to turn a standard HubSpot outbound setup into a functioning account-based engine—without replacing their stack.
The shift happened in three phases. First, get the foundation working: HubSpot as CRM, sequencing in place, reps actively prospecting. Then layer in intent. We pulled in live buying signals from tools like Factors.ai, G2, and Clay, and rebuilt ICP scoring around real-time behavior instead of static fit.
From there, the system becomes actionable. Accounts are scored at the account level, not the contact level. High-intent accounts route automatically to the right rep, with context on what they’re researching. A 48-hour SLA keeps follow-up tight, and leadership sees pipeline movement in real time—not just rep-reported activity.
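To make the rollup-and-route idea concrete, here is a simplified sketch. The signal names, weights, and threshold are illustrative only, not the client’s actual model or HubSpot’s scoring API:

```python
# Hypothetical intent signals and weights for account-level scoring.
SIGNAL_WEIGHTS = {
    "pricing_page_visit": 30,
    "g2_comparison": 25,
    "demo_request": 40,
    "blog_read": 5,
}

def account_score(contact_signals):
    """Sum weighted signals across all contacts belonging to one account,
    so scoring happens at the account level, not per contact."""
    return sum(
        SIGNAL_WEIGHTS.get(s, 0)
        for signals in contact_signals
        for s in signals
    )

def route(score, threshold=50):
    """High-intent accounts go straight to a rep; the rest keep nurturing."""
    return "route_to_rep" if score >= threshold else "nurture"

# Two contacts at the same account: their signals roll up together.
signals_by_contact = [["pricing_page_visit", "blog_read"], ["g2_comparison"]]
score = account_score(signals_by_contact)
print(score, route(score))  # 60 route_to_rep
```

The point of the sketch is the rollup: neither contact alone clears the threshold, but the account does, which is exactly the signal a contact-level score would miss.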
The result is a shift from broad messaging to focused execution. Reps spend time on accounts that are actually in motion.
If your outbound motion is stuck at sequencing, we can map what it would take to layer in intent, scoring, and routing inside your current HubSpot setup.