Synopsis
With new interviews thrice-weekly, The New Stack Makers' stream of featured speakers and interviews is all about the new software stacks that change the way we develop and deploy software. For The New Stack Analysts podcast, please see https://soundcloud.com/thenewstackanalysts. For The New Stack @ Scale podcast, please see https://soundcloud.com/thenewstackatscale. Subscribe to TNS on YouTube at: https://www.youtube.com/c/TheNewStack
Episodes
-
How Heroku Is ‘Re-Platforming’ Its Platform
24/04/2025 Duration: 18min. Heroku has been undergoing a major transformation, re-platforming its entire Platform as a Service (PaaS) offering over the past year and a half. This ambitious effort, dubbed “Fir,” will soon reach general availability. According to Betty Junod, CMO and SVP at Heroku (owned by Salesforce), the overhaul includes a shift to Kubernetes and OCI standards, reinforcing Heroku’s commitment to open source. The platform now features Heroku Cloud Native Buildpacks, which let developers create container images without Dockerfiles. Originally built on Ruby on Rails and predating Docker and AWS, Heroku now supports eight programming languages. The company has also deepened its open source engagement by becoming a platinum member of the Cloud Native Computing Foundation (CNCF), contributing to projects like OpenTelemetry. Additionally, Heroku has open sourced its Twelve-Factor Apps methodology, inviting the community to help modernize it to address evolving needs such as secrets management and workload identity. This sign
-
Container Security and AI: A Talk with Chainguard's Founder
22/04/2025 Duration: 20min. In this episode of The New Stack Makers, recorded at KubeCon + CloudNativeCon Europe, Alex Williams speaks with Ville Aikas, Chainguard founder and early Kubernetes contributor. They reflect on the evolution of container security, particularly how early assumptions—like trusting that users would validate container images—proved problematic. Aikas recalls the lack of secure defaults, such as allowing containers to run as root, stemming from the team’s internal Google perspective, which led to unrealistic expectations about external security practices. The Kubernetes community has since made strides with governance policies, secure defaults, and standard practices like avoiding long-lived credentials and supporting federated authentication. Aikas founded Chainguard to address the need for trusted, minimal, and verifiable container images—offering zero-CVE images, transparent toolchains, and full SBOMs. This security-first philosophy now extends to virtual machines and Java dependencies via Chainguard Libraries.T
-
Kelsey Hightower, AWS's Eswar Bala on Open Source's Evolution
17/04/2025 Duration: 37min. In a candid episode of The New Stack Makers, Kubernetes pioneer Kelsey Hightower and AWS’s Eswar Bala explored the evolving relationship between enterprise cloud providers and open source software at KubeCon + CloudNativeCon London. Hightower highlighted open source's origins as a grassroots movement challenging big vendors, and shared how it gave people—especially those without traditional tech credentials—a way into the industry. Recalling his own journey, Hightower emphasized that open source empowered individuals through contribution over credentials. Bala traced the early development of Kubernetes and his own transition from building container orchestration systems to launching AWS’s Elastic Kubernetes Service (EKS), driven by growing customer demand. The discussion, recorded at KubeCon + CloudNativeCon Europe, touched on how open source is now central to enterprise cloud strategies, with AWS not only contributing but creating projects like Karpenter, Cedar, and Kro. Both speakers agreed that open source's c
-
The Kro Project: Giving Kubernetes Users What They Want
15/04/2025 Duration: 21min. In a rare show of collaboration, Google, Amazon, and Microsoft have joined forces on Kro — the Kubernetes Resource Orchestrator — an open source, cloud-agnostic tool designed to simplify custom resource orchestration in Kubernetes. Announced during KubeCon + CloudNativeCon Europe, Kro was born from strong customer demand for a Kubernetes-native solution that works across cloud providers without vendor lock-in. Nic Slattery, Product Manager at Google, and Jesse Butler, Principal Product Manager at AWS, shared with The New Stack that unlike many enterprise products, Kro didn’t stem from top-down strategy but from consistent customer "pull" experienced by all three companies. It aims to reduce complexity by allowing platform teams to offer simplified interfaces to developers, enabling resource requests without needing deep service-specific knowledge. Kro also represents a unique cross-company collaboration, driven by a shared mission and open source values. Though still in its alpha stage, the project has already at
-
OpenSearch: What’s Next for the Search and Analytics Suite?
10/04/2025 Duration: 20min. OpenSearch has evolved significantly since its 2021 launch, recently reaching a major milestone with its move to the Linux Foundation. This shift from company-led to foundation-based governance has accelerated community contributions and enterprise adoption, as discussed by NetApp’s Amanda Katona in a New Stack Makers episode recorded at KubeCon + CloudNativeCon Europe. NetApp, an early adopter of OpenSearch following Elasticsearch’s licensing change, now offers managed services on the platform and contributes actively to its development. Katona emphasized how neutral governance under the Linux Foundation has lowered barriers to enterprise contribution, noting a 56% increase in downloads since the transition and growing interest from developers. OpenSearch 3.0, featuring a Lucene 10 upgrade, promises faster search capabilities—especially relevant as data volumes surge. NetApp’s ongoing investments include work on machine learning plugins and developer training resources. Katona sees the Linux Foundation’s invol
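For readers who haven't touched OpenSearch, a minimal sketch of indexing and querying with the opensearch-py client is shown below; the host, credentials, and index name are placeholders for illustration, not details from the episode.

```python
# Minimal sketch: index a document and run a match query with opensearch-py.
# Host, credentials, and index name are placeholders.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "localhost", "port": 9200}],
    http_auth=("admin", "admin"),
    use_ssl=True,
    verify_certs=False,
)

# Index a sample log document.
client.index(
    index="app-logs",
    body={"service": "checkout", "level": "error", "message": "payment timeout"},
    refresh=True,
)

# Full-text match query against the same index.
response = client.search(
    index="app-logs",
    body={"query": {"match": {"message": "timeout"}}},
)
print(response["hits"]["total"])
```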
-
Kong’s AI Gateway Aims to Make Building with AI Easier
03/04/2025 Duration: 21min. AI applications are evolving beyond chatbots into more complex and transformative solutions, according to Marco Palladino, CTO and co-founder of Kong. In a recent episode of The New Stack Makers, he discussed the rise of AI agents, which act as "virtual employees" to enhance organizational efficiency. For instance, AI can now function as a product manager for APIs—analyzing documentation, detecting inaccuracies, and making corrections. However, reliance on AI agents brings security risks, such as data leakage and governance challenges. Organizations need observability and safeguards, but developers often resist implementing these requirements manually. As GenAI adoption matures, teams seek ways to accelerate development without rebuilding security measures repeatedly. To address these challenges, Kong introduced AI Gateway, an open-source plugin for its API Gateway. AI Gateway supports multiple AI models across providers like AWS, Microsoft, and Google, offering developers a universal API to integrate AI secure
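As a rough illustration of the "universal API" idea (not Kong's documented interface), the sketch below sends an OpenAI-style chat payload to a single gateway route and leaves provider selection, credentials, and policy to the gateway; the URL and route are hypothetical.

```python
# Hypothetical sketch: the application talks to one gateway route with an
# OpenAI-style payload; which provider actually answers is the gateway's job.
# The gateway URL and route are placeholders.
import requests

GATEWAY_URL = "http://localhost:8000/ai/chat/completions"  # placeholder route

payload = {
    "messages": [
        {"role": "system", "content": "You review API documentation for errors."},
        {"role": "user", "content": "Summarize the breaking changes in v2 of our orders API."},
    ]
}

resp = requests.post(GATEWAY_URL, json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())
```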
-
What’s the Future of Platform Engineering?
27/03/2025 Duration: 26min. Platform engineering was meant to ease the burdens of Devs and Ops by reducing cognitive load and repetitive tasks. However, building internal development platforms (IDPs) has proven challenging. Despite this, Gartner predicts that by 2026, 80% of software engineering organizations will have a platform team. In a recent New Stack Makers episode, Mallory Haigh of Humanitec and Nathen Harvey of Google discussed the current state and future of platform engineering. Haigh emphasized that many organizations rush to build IDPs without understanding why they need them, leading to ineffective implementations. She noted that platform engineering is 10% technical and 90% cultural change, requiring deep introspection and strategic planning. AI-driven automation, particularly agentic AI, is expected to shape platform engineering’s future. Haigh highlighted how AI can enhance platform orchestration and optimize GPU resource management. Harvey compared platform engineering to generative AI—both aim to reduce toil and improve
-
AI Agents are Dumb Robots, Calling LLMs
20/03/2025 Duration: 28min. AI agents are set to transform software development, but software itself isn’t going anywhere—despite the dramatic predictions. On this episode of The New Stack Makers, Mark Hinkle, CEO and Founder of Peripety Labs, discusses how AI agents relate to serverless technologies, infrastructure-as-code (IaC), and configuration management. Hinkle envisions AI agents as “dumb robots” handling tasks like querying APIs and exchanging data, while the real intelligence remains in large language models (LLMs). These agents, likely implemented as serverless functions in Python or JavaScript, will automate software development processes dynamically. LLMs, leveraging vast amounts of open-source code, will enable AI agents to generate bespoke, task-specific tools on the fly—unlike traditional cloud tools from HashiCorp or configuration management tools like Chef and Puppet. As AI-generated tooling becomes more prevalent, managing and optimizing these agents will require strong observability and evaluation practices. According
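A minimal sketch of the "dumb robot" pattern Hinkle describes might look like the following; the LLM endpoint, model name, OpenAI-style response shape, and both tools are hypothetical placeholders.

```python
# Illustrative sketch: the agent holds no intelligence of its own. It forwards
# the task to an LLM endpoint, then runs whichever simple tool the model picks.
# Endpoint, model name, and tools are placeholders.
import json
import requests

LLM_ENDPOINT = "https://llm.example.com/v1/chat/completions"  # placeholder

def query_weather(city: str) -> str:
    return f"Sunny in {city}"  # stand-in for a real API call

def list_invoices(customer: str) -> str:
    return f"No open invoices for {customer}"  # stand-in for a real API call

TOOLS = {"query_weather": query_weather, "list_invoices": list_invoices}

def run_agent(task: str) -> str:
    # Ask the LLM which tool to call and with what argument.
    prompt = (
        f"Task: {task}\n"
        f"Available tools: {list(TOOLS)}\n"
        'Reply with JSON like {"tool": "...", "argument": "..."}'
    )
    resp = requests.post(
        LLM_ENDPOINT,
        json={"model": "placeholder-model",
              "messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    # Assumes the model returns pure JSON in an OpenAI-style response body.
    choice = json.loads(resp.json()["choices"][0]["message"]["content"])
    # The agent only dispatches; the reasoning stayed in the LLM.
    return TOOLS[choice["tool"]](choice["argument"])

print(run_agent("Does the customer Acme have unpaid invoices?"))
```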
-
Goodbye SaaS, Hello AI Agents
13/03/2025 Duration: 30min. The transition from SaaS to Services as Software with AI agents is underway, necessitating new orchestration methods similar to Kubernetes for containers. AI agents will require resource allocation, workflow management, and scalable infrastructure as they evolve. Two key trends are driving this shift: Data Evolution – From spreadsheets to AI agents, data has progressed through relational databases, big data, predictive analytics, and generative AI. Computing Evolution – Starting from mainframes, the journey has moved through desktops, client-server, web/mobile, SaaS, and now agentic workflows. Janakiram MSV, an analyst, notes on this episode of The New Stack Makers that SaaS depends on data—without it, platforms like Salesforce and SAP lack value. As data becomes more actionable and compute more agentic, a new paradigm emerges: Services as Software. AI agents will automate tasks previously requiring human intervention, like emails and sales follow-ups. However, orchestrating them will be complex, akin to Kub
-
How Generative AI Is Reshaping the SDLC
06/03/2025 Duration: 21min. Amazon Q Developer is streamlining the software development lifecycle by integrating AI-powered tools into AWS. In an interview at AWS in Seattle, Srini Iragavarapu, director of generative AI Applications and Developer Experiences at AWS, discussed how Amazon Q Developer enhances the developer experience. Initially focused on inline code completions, Amazon Q Developer evolved by incorporating generative AI models like Amazon Nova and Anthropic models, improving recommendations and accelerating development. British Telecom reported a 37% acceptance rate for AI-generated code. Beyond code completion, Amazon Q Developer enables developers to interact with Q for code reviews, test generation, and migrations. AWS also developed agentic frameworks to automate undifferentiated tasks, such as upgrading Java versions. Iragavarapu noted that internally, AWS used Q Developer to migrate 30,000 production applications, saving $260 million annually. The platform offers code generation, testing suites, RAG capabilities, and
-
OAuth Works for AI Agents but Scaling is Another Question
27/02/2025 Duration: 25min. Maya Kaczorowski noticed that AI identity and AI agent identity concerns were emerging from outside the security industry, rather than from CISOs and security leaders. She concluded that OAuth, the open standard for authorization, already serves the purpose of granting access without exposing passwords. Kaczorowski, a respected technologist and founder of Oblique, a startup focused on self-serve access controls, recently wrote about OAuth and AI agents and shared her insights on this episode of The New Stack Makers. She noted that developers see AI agents as extensions of themselves, granting them limited access to data and capabilities—precisely what OAuth is designed to handle. The challenges with AI agent identity are vast, involving different approaches to authentication, such as those explored by companies like AuthZed. While existing authorization models like RBAC or ABAC may still apply, the real challenge lies in scale. The exponential growth of AI-related entities—from users to LLMs—could mean even
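As a hedged illustration of why OAuth fits this use case, the sketch below shows an agent obtaining a short-lived, narrowly scoped access token via the standard client-credentials grant instead of ever holding a user's password; every URL, client ID, and scope is a placeholder.

```python
# Illustrative sketch of the OAuth pattern discussed above: the agent gets a
# scoped, short-lived token from an authorization server and presents it to
# the API. All URLs, IDs, and scopes are placeholders.
import requests

TOKEN_URL = "https://auth.example.com/oauth/token"   # placeholder authorization server
API_URL = "https://api.example.com/calendar/events"  # placeholder resource server

token_resp = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "client_credentials",
        "client_id": "agent-scheduler",          # identity of the agent, not the user
        "client_secret": "do-not-hardcode-me",   # load from a secret store in practice
        "scope": "calendar.read",                # grant only what this agent needs
    },
    timeout=30,
)
access_token = token_resp.json()["access_token"]

# The agent presents the scoped token; the API never sees user credentials.
events = requests.get(
    API_URL,
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=30,
)
print(events.status_code)
```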
-
LLMs and AI Agents Evolving Like Programming Languages
20/02/2025 Duration: 28min. The rise of the World Wide Web enabled developers to build tools and platforms on top of it. Similarly, the advent of large language models (LLMs) allows for creating new AI-driven tools, such as autonomous agents that interact with LLMs, execute tasks, and make decisions. However, verifying these decisions is crucial, and critical reasoning may be a solution, according to Yam Marcovitz, tech lead at Parlant.io and CEO of emcie.co. Marcovitz likens LLM development to the evolution of programming languages, from punch cards to modern languages like Python. Early LLMs started with small transformer models, leading to systems like BERT and GPT-3. Now, instead of mere text auto-completion, models are evolving to enable better reasoning and complex instructions. Parlant uses "attentive reasoning queries (ARQs)" to maintain consistency in AI responses, ensuring near-perfect accuracy. Their approach balances structure and flexibility, preventing models from operating entirely autonomously. Ultimately, Marcovitz argues
-
Writing Code About Your Infrastructure? That's a Losing Race
13/02/2025 Duration: 31min. Adam Jacob, CEO of System Initiative, discusses a shift in infrastructure automation—moving from writing code to building models that enable rapid simulations and collaboration. In The New Stack Makers, he compares this approach to Formula One racing, where teams use high-fidelity models to simulate race conditions, optimizing performance before hitting the track. System Initiative applies this concept to enterprise automation, creating a model that understands how infrastructure components interact. This enables fast, multiplayer feedback loops, simplifying complex tasks while enhancing collaboration. Engineers can extend the system by writing small, reactive JavaScript functions that automate processes, such as transforming existing architectures into new ones. The platform visually represents these transformations, making automation more intuitive and efficient. By leveraging models instead of traditional code-based infrastructure management, System Initiative enhances agility, reduces complexity, and improv
-
OpenTelemetry: What’s New with the 2nd Biggest CNCF Project?
06/02/2025 Duration: 30min. Morgan McLean, co-founder of OpenTelemetry and senior director of product management at Splunk, has long tackled the challenges of observability in large-scale systems. In a conversation with Alex Williams on The New Stack Makers, McLean reflected on his early frustrations debugging high-scale services and the need for better observability tools. OpenTelemetry, formed in 2019 from OpenTracing and OpenCensus, has since become a key part of modern observability strategies. As a Cloud Native Computing Foundation (CNCF) incubating project, it’s the second most active open source project after Kubernetes, with over 1,200 developers contributing monthly. McLean highlighted OpenTelemetry’s role in solving scaling challenges, particularly in Kubernetes environments, by standardizing distributed tracing, application metrics, and data extraction. Looking ahead, profiling is set to become the fourth major observability signal alongside logs, tracing, and metrics, with general availability expected in 2025. McLean emphasize
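For a concrete sense of the tracing signal McLean describes, here is a minimal example using the OpenTelemetry Python SDK; the console exporter is only for illustration, and a real deployment would typically export to a collector instead.

```python
# Minimal distributed-tracing example with the OpenTelemetry Python SDK.
# The console exporter is for illustration only.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

# Each request gets a root span; nested work becomes child spans,
# giving the cross-service timeline that tracing standardizes.
with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.id", "o-12345")
    with tracer.start_as_current_span("charge-card"):
        pass  # payment call would go here
```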
-
What’s Driving the Rising Cost of Observability?
30/01/2025 Duration: 24min. Observability is expensive because traditional tools weren’t designed for the complexity and scale of modern cloud-native systems, explains Christine Yen, CEO of Honeycomb.io. Logging tools, while flexible, were optimized for manual, human-scale data reading. This approach struggles with the massive scale of today’s software, making logging slow and resource-intensive. Monitoring tools, with their dashboards and metrics, prioritized speed over flexibility, which doesn’t align with the dynamic nature of containerized microservices. Similarly, traditional APM tools relied on “magical” setups tailored for consistent application environments like Rails, but they falter in modern polyglot infrastructures with diverse frameworks. Additionally, observability costs are rising due to evolving demands from DevOps, platform engineering, and site reliability engineering (SRE). Practices like service-level objectives (SLOs) emphasize end-user experience, pushing teams to track meaningful metrics. However, outdated observab
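As a small worked example of the SLO practice mentioned above (numbers are illustrative, not from the episode), a 99.9% availability target translates into an error budget that teams burn down as requests fail:

```python
# Worked example: a 99.9% availability SLO implies an error budget, and burn
# is measured against real request counts. All numbers are illustrative.
slo_target = 0.999                 # 99.9% of requests should succeed
total_requests = 10_000_000        # requests served this month
failed_requests = 4_200            # requests that violated the objective

error_budget = (1 - slo_target) * total_requests   # 10,000 failures allowed
budget_used = failed_requests / error_budget       # fraction of budget burned

print(f"Error budget: {error_budget:.0f} requests")
print(f"Budget consumed: {budget_used:.1%}")  # 42.0% of the monthly budget
```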
-
How Oracle Is Meeting the Infrastructure Needs of AI
23/01/2025 Duration: 27min. Generative AI is a data-driven story with significant infrastructure and operational implications, particularly around the rising demand for GPUs, which are better suited for AI workloads than CPUs. In an episode of The New Stack Makers recorded at KubeCon + CloudNativeCon North America, Sudha Raghavan, SVP for Developer Platform at Oracle Cloud Infrastructure, discussed how AI’s rapid adoption has reshaped infrastructure needs. The release of ChatGPT triggered a surge in GPU demand, with organizations requiring GPUs for tasks ranging from testing workloads to training large language models across massive GPU clusters. These workloads run continuously at peak power, posing challenges such as high hardware failure rates and energy consumption. Oracle is addressing these issues by building GPU superclusters and enhancing Kubernetes functionality. Tools like Oracle’s Node Manager simplify interactions between Kubernetes and GPUs, providing tailored observability while maintaining Kubernetes’ user-friendly experience
-
Arm: See a Demo About Migrating a x86-Based App to ARM64
16/01/2025 Duration: 21min. The hardware industry is surging, driven by AI's demanding workloads, with Arm—a 35-year-old pioneer in processor IP—playing a pivotal role. In an episode of The New Stack Makers recorded at KubeCon + CloudNativeCon North America, Pranay Bakre, principal solutions engineer at Arm, discussed how Arm is helping organizations migrate and run applications on its technology. Bakre highlighted Arm’s partnership with hyperscalers like AWS, Google, Microsoft, and Oracle, showcasing processors such as AWS Graviton and Google Axion, built on Arm’s power-efficient, cost-effective Neoverse IP. This design ethos has spurred wide adoption, with 90-95% of CNCF projects supporting native Arm binaries. Attendees at Arm’s booth frequently inquired about its plans to support AI workloads. Bakre noted the performance advantages of Arm-based infrastructure, delivering up to 60% workload improvements over legacy architectures. The episode also features a demo on migrating x86 applications to ARM64 in both cloud and containerized envir
-
Heroku Moved Twelve-Factor Apps to Open Source. What’s Next?
02/01/2025 Duration: 22min. Heroku has open-sourced its Twelve-Factor App methodology, initially created in 2011 to help developers build portable, resilient cloud applications. Heroku CTO Gail Frederick announced this shift at KubeCon + CloudNativeCon North America, explaining the move aims to involve the community in modernizing the framework. While the methodology inspired a generation of cloud developers, certain factors are now outdated, such as the focus on logs as event streams. Frederick highlighted the need for updates to address current practices like telemetry and metrics visualization, reflecting the rise of OpenTelemetry. The updated Twelve-Factor methodology will expand to accommodate modern cloud-native realities, such as deploying interconnected systems of apps with diverse backing services. Planned enhancements include supporting documents, reference architectures, and code examples illustrating the principles in action. Success will be measured by its applicability to use cases involving edge computing, IoT, serverless,
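For context, the original "logs as event streams" factor asks the app to write its log events to stdout and let the platform route and aggregate them, rather than managing log files itself; a minimal Python illustration (field names invented for the example) follows.

```python
# Small illustration of the twelve-factor logging idea: emit structured events
# to stdout as a stream and let the platform handle routing. Field names are
# invented for this example.
import json
import sys
import time

def log_event(event: str, **fields) -> None:
    record = {"ts": time.time(), "event": event, **fields}
    sys.stdout.write(json.dumps(record) + "\n")

log_event("request.completed", path="/checkout", status=200, duration_ms=42)
```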
-
How Falco Brought Real-Time Observability to Infrastructure
26/12/2024 Duration: 19min. Falco, an open-source runtime observability and security tool, was created by Sysdig founder Loris Degioanni to collect real-time system events directly from the kernel. Leveraging eBPF technology for improved safety and performance, Falco gathers data like pod names and namespaces, correlating them with customizable rules. Unlike static analysis tools, it operates in real-time, monitoring events as they occur. In this episode of The New Stack Makers, TNS Editor-in-Chief Heather Joslyn spoke with Thomas Labarussias, Senior Developer Advocate at Sysdig; Leonardo Grasso, Open Source Tech Lead Manager at Sysdig; and Luca Guerra, Sr. Open Source Engineer at Sysdig, to get the latest update on Falco. Graduating from the Cloud Native Computing Foundation (CNCF) in February 2024 after entering its sandbox six years prior, Falco’s maintainers have focused on technical maturity and broad usability. This includes simplifying installations across diverse environments, thanks in part to advancements from the Linux Foundat
-
How cert-manager Got to 500 Million Downloads a Month
19/12/2024 Duration: 23minJetstack’s cert-manager, a leading open-source project in Kubernetes certificate management, began as a job interview challenge. Co-founder Matt Barker recalls asking a prospective engineer to automate Let’s Encrypt within Kubernetes. By Monday, the candidate had created kube-lego, which evolved into cert-manager, now downloaded over 500 million times monthly.Cert-manager’s journey to CNCF graduation, achieved in September, began with its donation to the foundation four years ago. Relaunched as cert-manager, the project grew under engineer James Munnelly, becoming the de facto standard for certificate lifecycle management. The thriving community and ecosystem around cert-manager highlighted its suitability for CNCF stewardship. However, maintainers, including Ashley Davis, noted challenges in navigating differing opinions within its vast user base.With graduation achieved, cert-manager’s roadmap includes sub-projects like trust-manager, addressing TLS trust bundle management and Istio integration. Barker aims