VentureBeat: Transformative tech coverage that matters
- NanoClaw solves one of OpenClaw’s biggest security issues — and it’s already powering the creator’s biz, by carl.franzen@venturebeat.com (Carl Franzen) on February 11, 2026 at 3:37 pm
The rapid viral adoption of Austrian developer Peter Steinberger’s open source AI assistant OpenClaw in recent weeks has sent enterprises and indie developers into a tizzy. It’s easy to see why: OpenClaw is freely available now and offers a powerful means of autonomously completing work and performing tasks across a user’s entire computer, phone, or even business with natural language prompts that spin up swarms of agents. Since its release in November 2025, it’s captured the market with over 50 modules and broad integrations — but its “permissionless” architecture raised alarms among developers and security teams.

Enter NanoClaw, a lighter, more secure alternative that debuted under an open source MIT License on January 31, 2026, and achieved explosive growth—surpassing 7,000 stars on GitHub in just over a week. Created by Gavriel Cohen—an experienced software engineer who spent seven years at website builder Wix.com—the project was built to address the “security nightmare” inherent in complex, non-sandboxed agent frameworks. Cohen and his brother Lazer are also co-founders of Qwibit, a new AI-first go-to-market agency, and vice president and CEO, respectively, of Concrete Media, a respected public relations firm that often works with tech businesses covered by VentureBeat.

NanoClaw’s immediate solution to this architectural anxiety is a hard pivot toward operating system-level isolation. The project places every agent inside isolated Linux containers—utilizing Apple Containers for high-performance execution on macOS or Docker for Linux environments. This creates a strictly “sandboxed” environment where the AI only interacts with directories explicitly mounted by the user. While other frameworks build internal “safeguards” or application-level allowlists to block certain commands, Gavriel maintains that such defenses are inherently fragile. “I’m not running that on my machine and letting an agent run wild,” Cohen explained during a recent technical interview.
“There’s always going to be a way out if you’re running directly on the host machine. In NanoClaw, the ‘blast radius’ of a potential prompt injection is strictly confined to the container and its specific communication channel.”

A more secure foundation for agentic autonomy

The technical critique at the heart of NanoClaw’s development is one of bloat and auditability. When Cohen first evaluated OpenClaw (formerly Clawbot), he discovered a codebase approaching 400,000 lines with hundreds of dependencies. In the fast-moving AI landscape, such complexity is both an engineering hurdle and a potential liability. “As a developer, every open source dependency that we added to our codebase, you vet. You look at how many stars it has, who are the maintainers, and if it has a proper process in place,” Cohen notes. “When you have a codebase with half a million lines of code, nobody’s reviewing that. It breaks the concept of what people rely on with open source.”

NanoClaw counters this by reducing the core logic to roughly 500 lines of TypeScript. This minimalism ensures that the entire system—from the state management to the agent invocation—can be audited by a human or a secondary AI in roughly eight minutes. The architecture employs a single-process Node.js orchestrator that manages a per-group message queue with concurrency control. Instead of heavy distributed message brokers, it relies on SQLite for lightweight persistence and filesystem-based IPC. This design choice is intentional: by using simple primitives, the system remains transparent and reproducible.

Furthermore, the isolation extends beyond just the filesystem. NanoClaw natively supports Agent Swarms via the Anthropic Agent SDK, allowing specialized agents to collaborate in parallel.
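The single-process, per-group queue described above can be sketched in a few lines of TypeScript. This is a hypothetical illustration of the pattern (the class and method names are invented), not NanoClaw’s actual source:

```typescript
// Per-group FIFO queue: jobs within a group run strictly in order,
// while different groups proceed concurrently. This mirrors the
// "per-group message queue with concurrency control" idea at toy scale.
type Job = () => Promise<void>;

class PerGroupQueue {
  // Tail of each group's promise chain; chaining enforces FIFO order.
  private tails = new Map<string, Promise<void>>();

  enqueue(group: string, job: Job): Promise<void> {
    const tail = this.tails.get(group) ?? Promise.resolve();
    const next = tail.then(job);
    // Swallow errors on the stored tail so one failed job
    // doesn't block the rest of the group's queue.
    this.tails.set(group, next.catch(() => undefined));
    return next;
  }
}
```

In a real orchestrator, each job would hand a message to an agent container, and persisting queue state in SQLite (as the article describes) would let the process recover after a restart.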
In this model, each sub-agent in a swarm can be isolated with its own specific memory context, preventing sensitive data from leaking between different chat groups or business functions.

The product vision: Skills over features

One of the most radical departures in NanoClaw is its rejection of the traditional “feature-rich” software model. Cohen describes NanoClaw as “AI-native” software—a system designed to be managed and extended primarily through AI interaction rather than manual configuration. The project explicitly discourages contributors from submitting PRs that add broad features like Slack or Discord support to the main branch. Instead, they are encouraged to contribute “Skills”—modular instructions housed in .claude/skills/ that teach a developer’s local AI assistant how to transform the code.

“If you want Telegram, rip out the WhatsApp and put in Telegram,” Cohen says. “Every person should have exactly the code they need to run their agent. It’s not a Swiss Army knife; it’s a secure harness that you customize by talking to Claude Code.” This “Skills over Features” model means that a user can run a command like /add-telegram or /add-gmail, and the AI will rewrite the local installation to integrate the new capability while keeping the codebase lean. This methodology ensures that if a user only needs a WhatsApp-based assistant, they aren’t forced to inherit the security vulnerabilities of fifty other unused modules.

Real-world utility in an AI-native agency

This isn’t merely a theoretical experiment for the Cohen brothers. Their new AI go-to-market agency Qwibit uses NanoClaw—specifically a personal instance named “Andy”—to run its internal operations. “Andy manages our sales pipeline for us. I don’t interact with the sales pipeline directly,” Cohen explained. The agent provides Sunday-through-Friday briefings at 9:00 AM, detailing lead statuses and assigning tasks to the team.

The utility lies in the frictionless capture of data.
Throughout the day, Lazer and Gavriel forward messy WhatsApp notes or email threads into their admin group. Andy parses these inputs, updates the relevant files in an Obsidian vault or SQLite database, and sets automated follow-up reminders. Because the agent has access to the codebase, it can also be tasked with recurring technical jobs, such as reviewing git history for “documentation drift” or refactoring its own functions to improve ergonomics for future agents.

Strategic evaluation for the enterprise

As the pace of change accelerates in early 2026, technical decision-makers face a fundamental choice between convenience and control. For AI engineers focused on rapid deployment, NanoClaw offers a blueprint for what Cohen calls the “best harness” for the “best model.” By building on top of the Claude Agent SDK, NanoClaw provides a pathway to leverage state-of-the-art models (like Opus 4.6) within a framework that a lean engineering team can actually maintain and optimize.

From the perspective of orchestration engineers, NanoClaw’s simplicity is its greatest asset for building scalable, reliable pipelines. Traditional, bloated frameworks often introduce budget-draining overhead through complex microservices and message queues. NanoClaw’s container-first approach allows for the implementation of advanced AI technologies—including autonomous swarms—without the resource constraints and technical debt associated with 400,000-line legacy systems.

Perhaps most critically, for security leaders, NanoClaw addresses the demands of incident response and organizational protection. In an environment where prompt injection and data exfiltration techniques are evolving daily, a 500-line auditable core is far safer than a generic system trying to support every use case. “I recommend you send the repository link to your security team and ask them to audit it,” Cohen advises.
“They can review it in an afternoon—not just read the code, but whiteboard the entire system, map out the attack vectors, and verify it’s safe.”

Ultimately, NanoClaw represents a shift in the AI developer mindset. It is an argument that as AI becomes more powerful, the software that hosts it should become simpler. In the race to automate the enterprise, the winners may not be those who adopt the most features, but those who build upon the most transparent and secure foundations.
- Why enterprise IT operations are breaking — and how AgenticOps fixes them, on February 11, 2026 at 5:00 am
Presented by Cisco

AI agents are breaking traditional IT operations models, adding complexity, data silos, and fragmented workflows. DJ Sampath, Cisco’s SVP of AI Software and Platform, believes that AgenticOps is the solution: a new operational paradigm where humans and AI collaborate in real time to create efficiency, boost security, and allow for innovative technological applications.

In a recent conversation with VentureBeat, Sampath outlined why current enterprise IT management is fundamentally breaking and what makes AgenticOps not just useful, but necessary for IT operations going forward.

The breaking point of traditional IT operations

The core problem plaguing enterprise IT today is fragmentation, Sampath said. “A lot of times inside of these enterprises, data is sitting across multiple different silos,” he explained. “For an operator to come in and start troubleshooting something, they have to go through many different dashboards, many different products, and that results in an increasing amount of time spent trying to figure out what is where before they can actually get to the root cause of an issue.”

This challenge is about to intensify dramatically. As AI agents become ubiquitous within enterprises, the complexity will multiply exponentially. “Every single person is going to have at least 10 or more agents that are working on their behalf doing different types of things,” Sampath said. “This problem is only going to be tenfold, if not a hundredfold worse when you start to think about what’s really happening with the inclusion of agents.”

Three core principles of AgenticOps

To address these challenges, Cisco has developed its AgenticOps capabilities around three fundamental design principles that Sampath believes must be true for this new operational model to succeed.

First, unified data access across silos. The platform must bring together disparate data sources: network data, security data, application data, and infrastructure data.
“Bringing all of that stuff together is going to be incredibly important so that the agents that you are deploying to do work on your behalf can seamlessly connect the dots across the board,” Sampath said.

Second, multiplayer-first design. AgenticOps must be fundamentally collaborative from the ground up, enabling IT operations, security operations, network operations teams — and agents — to work together seamlessly. “When you bring the IT ops person, the SecOps person, the NetOps person all together, you can troubleshoot and debug issues a whole lot faster than if you’re working in silos and copy pasting things back and forth,” he explained. “It’s humans and agents working together in a synchronous environment.”

Third, purpose-built AI models. While general-purpose AI models excel at broad tasks, specialized operations require models trained for specific domains. “When you start to go into specializations, it becomes really important for these models to understand very specific things like network configuration or threat models that you care about, and to be able to reason about that,” he said.

How Cisco operationalizes AgenticOps across the enterprise stack

Cisco’s approach unites telemetry, intelligence, and collaboration into a single coherent platform. Cisco AI Canvas is an operations workspace that replaces multiple dashboards with a generative UI and a unified collaborative experience. Within AI Canvas, operators can use natural language to delegate actions to agents — pulling telemetry, correlating signals, testing hypotheses, and executing changes — while maintaining human-in-the-loop control.

The reasoning capabilities come from Cisco’s Deep Network Model, trained on over 40 years of operational data including CCIE expertise, production telemetry, Cisco’s Technical Assistance Center (TAC), and Customer Experience (CX) insights.
This purpose-built model delivers domain-specific intelligence that general-purpose models cannot match. Cisco’s platform spans campus, branch, cloud, and edge environments, allowing agents to consume telemetry across the entire ecosystem at machine speed, including Meraki, ThousandEyes, and Splunk. With MCP servers implemented across Cisco products, agents gain standardized access to tools and data without custom integration work.

How fragmented reporting data undermines IT troubleshooting

The traditional approach to IT troubleshooting involves raising tickets and piecing together fragmented information across multiple systems. “People take screenshots. Sometimes it’s in Post-it notes,” Sampath said. “All of this information stays in completely different channels so it becomes really hard for somebody to start collecting them together.”

Cisco AI Canvas addresses this by giving teams one shared, real-time workspace for the work at hand — so context doesn’t get scattered across chats, tickets and screen shares. Teams can collaborate live, escalate instantly, and contribute context (such as screenshots and notes) alongside the agent’s generated charts and graphs. But the real power emerges when AI agents join these collaborative sessions. “The machines are constantly learning from these human-to-machine interactions,” Sampath explained. “When you see that same problem happen again, you are that much faster in responding because the machines can assist you.”

This creates a virtuous cycle of continuous improvement: the agent might ask whether you’d like to reuse the same approach as last time, letting you hand over more work to it, and time spent debugging gets compressed as the system learns and accelerates future responses.

Security as an AI accelerator

Historically, security has been considered a roadblock to adoption and even innovation.
But with the right guardrails, organizations can confidently deploy AI at scale, and even accelerate it. Employees have already experienced the productivity gains of tools like ChatGPT and want similar capabilities within their enterprise environments. When organizations can detect personally identifiable information, prevent prompt injection attacks, and maintain proper data governance, they can unlock AI adoption inside the enterprise in a fundamentally different fashion.

The identity layer required for cross-domain AgenticOps

Cross-domain data access presents one of the most complex challenges in AgenticOps implementation. Cisco’s strategic acquisitions, particularly Splunk, position the company to address this by unifying data across traditionally disconnected systems. But bringing data together is only half the battle: who has access to what data becomes vitally important.

Cisco is evolving its Duo platform beyond multi-factor authentication to serve as a comprehensive identity provider, with robust identity and access management baked into the platform from the beginning, not bolted on as an afterthought. “We’re investing in identity as a very core pillar of how these agents are going to be able to pull data from different data sources with the right authorization in mind,” Sampath explained. “Should this agent have access to this type of data? Should you be correlating these types of data together to be able to solve a problem?”

Humans in the loop, but at a higher level

As AI agents become more autonomous, the role of humans will evolve rather than disappear. “We’re always going to have humans in the loop,” Sampath said. “What you’re going to see is the complexity of the tasks that are being performed are going to be a lot more involved.”

Take coding as an example, which today can be entirely agentic.
The human role has shifted from manual coding, or even tab completion, to asking an agent to create code wholesale, and then verifying that it meets requirements before merging it into the codebase. This pattern will repeat across IT operations, with humans focusing on higher-level decision-making while agents handle execution. Importantly, rollback capabilities ensure that even autonomous actions can be reversed if needed.

Why waiting for AI to ‘settle down’ is the wrong move

For CIOs and CTOs, the message is clear: don’t wait. “A lot of folks are in this holding pattern of waiting and watching,” Sampath said. “They’re waiting for AI to settle down before they make some of their decisions. And I think that is the wrong way to think about this. A partnership with the right groups of people, with the right sets of vendors, is going to help you go a whole lot faster, as opposed to trying to just stay on the fence, trying to figure out what’s right and what’s wrong.”

Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.
- OpenAI upgrades its Responses API to support agent skills and a complete terminal shell, by carl.franzen@venturebeat.com (Carl Franzen) on February 10, 2026 at 10:25 pm
Until recently, the practice of building AI agents has been a bit like training a long-distance runner with a thirty-second memory. Yes, you could give your AI models tools and instructions, but after a few dozen interactions — several laps around the track, to extend our running analogy — the model would inevitably lose context and start hallucinating. With OpenAI’s latest updates to its Responses API — the application programming interface that allows developers on OpenAI’s platform to access multiple agentic tools like web search and file search with a single call — the company is signaling that the era of the limited agent is waning. The updates announced today include Server-side Compaction, Hosted Shell Containers, and support for the new “Skills” standard for agents.

With these three major updates, OpenAI is effectively handing agents a permanent desk, a terminal, and a memory that doesn’t fade. Together, these should help agents evolve further into reliable, long-term digital workers.

Technology: overcoming ‘context amnesia’

The most significant technical hurdle for autonomous agents has always been the “clutter” of long-running tasks. Every time an agent calls a tool or runs a script, the conversation history grows. Eventually, the model hits its token limit, and the developer is forced to truncate the history—often deleting the very “reasoning” the agent needs to finish the job.

OpenAI’s answer is Server-side Compaction. Unlike simple truncation, compaction allows agents to run for hours or even days. Early data from e-commerce platform Triple Whale suggests this is a breakthrough in stability: their agent, Moby, successfully navigated a session involving 5 million tokens and 150 tool calls without a drop in accuracy.

In practical terms, this means the model can “summarize” its own past actions into a compressed state, keeping the essential context alive while clearing the noise.
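OpenAI performs compaction server-side and has not published its mechanics, but a simplified client-side analogue (all names here are invented for illustration) shows the basic shape: fold older turns into a single summary message while keeping recent turns verbatim.

```typescript
interface Message {
  role: "system" | "user" | "assistant" | "tool";
  content: string;
}

// Toy compaction: replace everything except the last `keep` messages with one
// synthetic summary message. A production system would have a model write the
// summary; here we just concatenate truncated snippets to show the mechanics.
function compact(history: Message[], keep = 4): Message[] {
  if (history.length <= keep) return history;
  const older = history.slice(0, history.length - keep);
  const digest = older
    .map((m) => `${m.role}: ${m.content.slice(0, 40)}`)
    .join("; ");
  return [
    { role: "system", content: `Summary of ${older.length} earlier messages: ${digest}` },
    ...history.slice(-keep),
  ];
}
```

The key difference from truncation is that the dropped turns still contribute a compressed trace, so the agent retains a thread of its earlier reasoning.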
This transforms the model from a forgetful assistant into a persistent system process.

Managed cloud sandboxes

The introduction of the Shell Tool moves OpenAI into the realm of managed compute. Developers can now opt for container_auto, which provisions an OpenAI-hosted Debian 12 environment. This isn’t just a code interpreter: it gives each agent its own full terminal environment pre-loaded with:

- Native execution environments including Python 3.11, Node.js 22, Java 17, Go 1.23, and Ruby 3.1.
- Persistent storage via /mnt/data, allowing agents to generate, save, and download artifacts.
- Networking capabilities that allow agents to reach out to the internet to install libraries or interact with third-party APIs.

The Hosted Shell and its persistent /mnt/data storage provide a managed environment where agents can perform complex data transformations using Python or Java without requiring the team to build and maintain custom ETL (Extract, Transform, Load) middleware for every AI project. By leveraging these hosted containers, data engineers can implement high-performance data processing tasks while removing the overhead of building and securing their own sandboxes. OpenAI is essentially saying: “Give us the instructions; we’ll provide the computer.”

OpenAI’s Skills vs. Anthropic’s Skills

Both OpenAI and Anthropic now support “skills,” instructions for agents to run specific operations, and have converged on the same open standard — a SKILL.md (markdown) manifest with YAML frontmatter. A skill built for either can theoretically be moved to VS Code, Cursor, or any other platform that adopts the specification. Indeed, the hit new open source AI agent OpenClaw adopted this exact SKILL.md manifest and folder-based packaging, allowing it to inherit a wealth of specialized procedural knowledge originally designed for Claude.
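Since both vendors read the same folder-plus-manifest layout, a skill is little more than a markdown file with YAML frontmatter. A minimal, hypothetical manifest (the name, description, and steps below are illustrative, not taken from either vendor’s catalog) might look like:

```markdown
---
name: changelog-writer
description: Drafts a CHANGELOG entry from recent git history.
---

# Changelog Writer

When the user asks for a changelog entry:

1. Run `git log --oneline -20` to collect recent commits.
2. Group the commits into Added / Changed / Fixed sections.
3. Draft the entry and show it to the user for approval before writing it to CHANGELOG.md.
```

Because the manifest is plain markdown plus metadata, the same folder can be dropped into any runtime that understands the standard.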
This architectural compatibility has fueled a community-driven “skills boom” on platforms like ClawHub, which now hosts over 3,000 community-built extensions ranging from smart home integrations to complex enterprise workflow automations.

This cross-pollination demonstrates that the “Skill” has become a portable, versioned asset rather than a vendor-locked feature. Because OpenClaw supports multiple models — including OpenAI’s GPT-5 series and local Llama instances — developers can now write a skill once and deploy it across a heterogeneous landscape of agents. But the underlying strategies of OpenAI and Anthropic reveal divergent visions for the future of work.

OpenAI’s approach prioritizes a “programmable substrate” optimized for developer velocity. By bundling the shell, the memory, and the skills into the Responses API, the company offers a “turnkey” experience for building complex agents rapidly. Already, enterprise AI search startup Glean reported a jump in tool accuracy from 73% to 85% by using OpenAI’s Skills framework.

By pairing the open standard with its proprietary Responses API, the company provides a high-performance, turnkey substrate. It isn’t just reading the skill; it is hosting it inside a managed Debian 12 shell, handling the networking policies, and applying server-side compaction to ensure the agent doesn’t lose its way during a five-million-token session. This is the “high-performance” choice for engineers who need to deploy long-running, autonomous workers without the overhead of building a bespoke execution environment.

Anthropic, meanwhile, has focused on the “expertise marketplace.” Its strength lies in a mature directory of pre-packaged partner playbooks from the likes of Atlassian, Figma, and Stripe.
Implications for enterprise technical decision-makers

For engineers focused on “rapid deployment and fine-tuning,” the combination of Server-side Compaction and Skills provides a massive productivity boost. Instead of building custom state management for every agent run, engineers can leverage built-in compaction to handle multi-hour tasks. Skills allow for “packaged IP,” where specific fine-tuning or specialized procedural knowledge can be modularized and reused across different internal projects.

For those tasked with moving AI from a “chat box” into a production-grade workflow, OpenAI’s announcement marks the end of the “bespoke infrastructure” era. Historically, orchestrating an agent required significant manual scaffolding: developers had to build custom state-management logic to handle long conversations and secure, ephemeral sandboxes to execute code. The challenge is no longer “How do I give this agent a terminal?” but “Which skills are authorized for which users?” and “How do we audit the artifacts produced in the hosted filesystem?” OpenAI has provided the engine and the chassis; the orchestrator’s job is now to define the rules of the road.

For security operations (SecOps) managers, giving an AI model a shell and network access is a high-stakes evolution. OpenAI’s use of Domain Secrets and Org Allowlists provides a defense-in-depth strategy, ensuring that agents can call APIs without exposing raw credentials to the model’s context. But as agents become easier to deploy via “Skills,” SecOps must be vigilant about “malicious skills” that could introduce prompt injection vulnerabilities or unauthorized data exfiltration paths.

How should enterprises decide?

OpenAI is no longer just selling a “brain” (the model); it is selling the “office” (the container), the “memory” (compaction), and the “training manual” (skills). For enterprise leaders, the choice is becoming clear:

- Choose OpenAI’s Responses API if your agents require heavy-duty, stateful execution. If you need a managed cloud container that can run for hours and handle 5M+ tokens without context degradation, OpenAI’s integrated stack is the “High-Performance OS” for the agentskills.io standard.
- Choose Anthropic if your strategy relies on immediate partner connectivity. If your workflow centers on existing, pre-packaged integrations from a wide directory of third-party vendors, Anthropic’s mature ecosystem provides a more “plug-and-play” experience for the same open standard.

Ultimately, this convergence signals that AI has moved out of the “walled garden” era. By standardizing on agentskills.io, the industry is turning “prompt spaghetti” into a shared, versioned, and truly portable architecture for the future of digital work.

Update Feb. 10, 6:52 pm ET: this article has since been updated to correct errors in an earlier version regarding the portability of OpenAI’s Skills compared to Anthropic’s. We regret the errors.
AWS News Blog: Announcements, Updates, and Launches
- AWS Weekly Roundup: Claude Opus 4.6 in Amazon Bedrock, AWS Builder ID Sign in with Apple, and more (February 9, 2026), by Sébastien Stormacq on February 9, 2026 at 8:42 pm
This week’s roundup covers launches across compute, networking, security, and AI. Amazon EC2 introduces new C8id, M8id, and R8id instances powered by custom Intel Xeon 6 processors. AWS Network Firewall announces price reductions. Amazon DynamoDB global tables now support replication across multiple AWS accounts for improved resiliency and workload isolation. On the security front, AWS Builder ID now supports Sign in with Apple, AWS STS adds validation for identity provider claims, and Amazon CloudFront introduces mutual TLS support for origins to enforce certificate-based authentication. For AI, Claude Opus 4.6—Anthropic’s most intelligent model—is now available in Amazon Bedrock, bringing industry-leading performance for agentic tasks and complex coding projects. Amazon Bedrock also adds structured outputs for consistent, machine-readable responses that adhere to your defined JSON schemas.
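Structured outputs constrain a model’s reply to a JSON schema you supply. As a hedged illustration of what that guarantee means (this is a toy validator, not the Bedrock API), checking a reply against a schema of required keys and types looks like:

```typescript
// Toy schema check: every required key must exist with the expected
// primitive type. Real systems would use a full JSON Schema validator;
// Bedrock's structured outputs enforce the schema on the model side.
type Primitive = "string" | "number" | "boolean";

interface SimpleSchema {
  required: Record<string, Primitive>;
}

function conforms(reply: string, schema: SimpleSchema): boolean {
  let parsed: unknown;
  try {
    parsed = JSON.parse(reply);
  } catch {
    return false; // not even valid JSON
  }
  if (typeof parsed !== "object" || parsed === null) return false;
  const obj = parsed as Record<string, unknown>;
  return Object.entries(schema.required).every(([key, t]) => typeof obj[key] === t);
}
```

With server-side enforcement, downstream code can skip this defensive parsing and consume the response directly.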
- Amazon EC2 C8id, M8id, and R8id instances with up to 22.8 TB local NVMe storage are generally available, by Channy Yun (윤석찬) on February 4, 2026 at 10:31 pm
AWS launches Amazon EC2 C8id, M8id, and R8id instances backed by NVMe-based SSD block-level instance storage physically connected to the host server. These instances offer three times more vCPUs, memory, and local storage than their predecessors, with up to 22.8 TB of local NVMe-backed SSD block-level storage.
- AWS IAM Identity Center now supports multi-Region replication for AWS account access and application use, by Channy Yun (윤석찬) on February 3, 2026 at 7:13 pm
AWS IAM Identity Center now supports multi-Region replication of workforce identities and permission sets, enabling improved resiliency for AWS account access and allowing applications to be deployed closer to users while meeting data residency requirements.
