If you find our site valuable and helpful, consider supporting us or sponsoring one of our curations.
| Source/Link | Description |
|---|---|
Information Week - A practical guide to controlling AI agent costs before they spiral | For AI agents, non-determinism has the effect of making it virtually impossible to anticipate exactly how an agent will fulfill a request -- or even to assume that the way it completed a task historically will continue to be the way it does so in the future. By extension, token costs, infrastructure resource consumption rates and agent maintenance requirements may also vary. |
SiliconANGLE - Identity theft becomes the new perimeter as attackers bypass security defenses | In 2026, attackers are now abusing valid credentials and trusted integrations to move through systems undetected instead of relying on malware or exploiting software vulnerabilities. A key driver of the trend is a significant rise in infostealer malware, including families such as LummaC2. Infostealer tools harvest browser-stored passwords, session cookies and authentication tokens before packaging them into data sets that are sold on underground marketplaces to other threat actors. |
Interviews with analysts, CIOs, and AI platform and governance leaders point to a consistent pattern. The problem is not that AI fails technically. It’s that enterprises are applying legacy budgeting, operating, and accountability models to a technology whose economics behave very differently. As a result, ROI erodes not because AI stops working, but because organizations lose the ability to explain, defend, and prioritize it. | |
Codex is a coding assistant offered as part of ChatGPT that allows developers to interact directly with code repositories by issuing prompts that trigger automated tasks such as code generation, reviews and pull requests. The tasks run inside managed container environments that clone repositories and authenticate using short-lived GitHub OAuth tokens, creating a useful but sensitive execution layer. | |
For cybersecurity leaders, this lesson is particularly relevant. Many organizations are currently evaluating AI as a tool to incrementally improve existing workflows—automating alerts, accelerating triage, or enhancing reporting. While these gains are valuable, they are not transformative. The real risk lies in failing to recognize how AI may fundamentally alter the nature of threats, attack surfaces, and defensive strategies. | |
However, the next generation of digital systems increasingly interacts with regional regulations, real-time decision loops, and the physical world in general. These factors do not tolerate distance well. Smart traffic systems can’t wait for a round-trip to distant cloud regions. Industrial control systems can’t halt operations because a wide-area link is congested. AI-driven video analytics becomes costly and inefficient when every frame must be sent back to a centralized platform for inference. In these environments, it matters where the data is created and processed and where decisions are made. | |
Computerworld - AI regulations are already out of date — IT leaders need to think ahead | Pope said something similar during the AI in the Age of Regulation discussion. No federal regulations will come out of the United States anytime soon, and states are publishing their own rules, she said. California, for instance, is focused on transparency, watermarking, and how AI will affect individuals and groups. |
InfoWorld - Anthropic throttles Claude subscriptions to meet capacity | The rationale here is that by accelerating how quickly users hit their session limits within these windows, Anthropic is effectively redistributing access to prevent system overloads while still preserving overall weekly usage quotas. |
ZDNet - 5 security tactics your business can't get wrong in the age of AI - and why they're critical | Panayi said the multifaceted nature of AI cybersecurity means professionals should expect new roles and responsibilities to emerge, with people sharing knowledge and swapping between teams to create a more powerful approach. |
VentureBeat - Oracle converges the AI data stack to give enterprise agents a single version of truth | Enterprise data teams moving agentic AI into production are hitting a consistent failure point at the data tier. Agents built across a vector store, a relational database, a graph store and a lakehouse require sync pipelines to keep context current. Under production load, that context goes stale. |
SiliconANGLE - RSAC 2026: AI hype meets operating model reality | Organizations are still struggling to consolidate the sprawl of tools in their security stacks and at the same time apply zero-trust principles. To avoid AI becoming yet another layer, organizations must tie AI to clear outcomes and integrate intelligence into operating processes. Enterprise Technology Research survey data captures the challenge. At least 90% of organizations say they’re leveraging AI somewhere in their security stack, but 75% are applying AI to less than 10% of their security portfolio. |
Computerworld - OpenAI’s Foundation play reframes the AI roadmap for IT leaders | It’s an important development that enterprises can learn from, particularly in legacy environments that still hold a ‘don’t share your data’ mindset, Jackson noted. Enterprises are still siloed or protective of their data, and often don’t look at it holistically, even across internal departments. |
Strong winds are buffeting the tightrope and there is legitimate concern that protection of global networks could fall into the digital abyss. At this week’s gathering of cybersecurity professionals for RSAC in San Francisco, a steady drumbeat of keynotes and side sessions offered evidence that threat actors have not only adopted AI, but they are having success in using autonomous technology to fuel identity-based attacks, large-scale denials of service, and poisoning of the software supply chain. | |
Computerworld - Microsoft backtracks on Copilot Chat access in M365 apps | Microsoft is set to remove Copilot Chat access within Microsoft 365 apps such as Word, Excel, and PowerPoint for large M365 commercial customers starting April 15 — a “mystifying backtrack,” according to one technology industry analyst. |
SiliconANGLE - The $1T infrastructure war: How Nvidia is replatforming the agentic era | The Agent Toolkit isn’t about owning the “intelligence” (the AI models); it’s about owning the infrastructure beneath every enterprise agent. Whether you’re running GPT-4, Claude or Llama, Nvidia wants to be the plumbing. Not the brain — the substrate. |
Computerworld - Google targets AI inference bottlenecks with TurboQuant | By compressing these workloads more aggressively without affecting output quality, TurboQuant could allow developers to run more inference jobs on existing hardware and ease some of the cost pressure around deploying large models. |
Computerworld - HP will cram a 20-billion-parameter AI model into new AI PCs | Initial HP IQ on-device AI experiences include Ask IQ, which responds to both text and voice inputs; Analyze, which looks at personal files and generates summaries and actionable insights; Notes and Knowledge, which keeps track of interactions and organizes notes; and Meeting Agent, which records notes or captures ideas during meetings. HP says additional capabilities will roll out later this year. |
Forbes - Why AI Cyberattacks Have Made Your Software Security Strategy Obsolete | AI now enables adversaries to scan millions of lines of code in minutes, identify exploitable patterns, generate novel attack vectors, and adapt in real time when initial attempts fail. Phishing campaigns that once required weeks of social engineering now deploy at scale with personalized precision. |
SiliconANGLE - Emma Technologies unifies cloud infrastructure governance for legacy IT environments | The challenge of bringing all of these brownfield environments into a unified management plane is immense, because everything has to be rebuilt from scratch, which requires enormous resources. As a result, many enterprises choose to ignore governance, resulting in fragmented visibility over their sprawling infrastructure estates. |
VentureBeat - Cloudflare’s new Dynamic Workers ditch containers to run AI agent code 100x faster | For enterprise technical decision makers, that is the bigger story. Cloudflare is trying to turn sandboxing itself into a strategic layer in the AI stack. If agents increasingly generate small pieces of code on the fly to retrieve data, transform files, call services or automate workflows, then the economics and safety of the runtime matter almost as much as the capabilities of the model. Cloudflare’s pitch is that containers and microVMs remain useful, but they are too heavy for a future where millions of users may each have one or more agents writing and executing code constantly. |
ZDNet - 5 ways to harden your network against the new speed of AI attacks | Modern enterprise networks are widely distributed and can hand off tasks to partners via software-as-a-service. The bad guys are doing the same thing, Mandiant reports, using a "division of labor" model, in which one group uses low-impact techniques like malicious advertisements or fake browser updates to gain access to a network, then handing off the compromised target to a secondary group for hands-on access. |
MolmoWebMix, the accompanying dataset, includes 30,000 human task trajectories across more than 1,100 websites, 590,000 individual subtask demonstrations and 2.2 million screenshot question-answer pairs — which Ai2 describes as the largest publicly released collection of human web-task execution ever assembled. | |
If you look at Microsoft as a collection of product lines, it is easy to conclude that Windows 11 and Azure occupy different universes. One is a client operating system that has irritated its users, confused administrators, and pushed hardware refresh cycles in ways many customers did not want. The other is a hyperscale cloud platform selling compute, storage, data services, and AI infrastructure to enterprises. On paper, these are different businesses. In practice, they are part of the same trust system. | |
InfoWorld - New ‘StoatWaffle’ malware auto‑executes attacks on developers | According to NTT Security findings, the malware marks an evolution from the long-running campaign’s user-triggered execution to a near-frictionless compromise embedded directly in developer workflows. Attackers are using blockchain-themed project repositories as decoys, embedding a malicious VS Code configuration file that triggers code execution when the folder is opened and trusted by the victim. |
SiliconANGLE - Solink upgrades VerifEye platform to streamline global security operations centers | The platform centralizes data across departments to allow organizations to monitor access control, detect suspicious transaction patterns and identify operational disruptions such as service delays or unusual traffic flows. Operator actions are also recorded in an audit trail to support compliance and insurance requirements. |
Computerworld - Zoom sees human conversation as its edge in the agentic AI era | As AI agents become better at acting autonomously on behalf of users, human interactions could shift away from applications like Zoom to those agents. In that scenario, collaboration software apps risk becoming the underlying infrastructure rather than the primary interface, a shift that recently prompted concerns about a broader “SaaS-pocalypse” following the launch of AI agent tools such as Anthropic’s Claude Cowork. |
ZDNet - 3 ways Cisco's DefenseClaw aims to make agentic AI safer | DefenseClaw is the "operational layer" for agentic security that has been missing, according to Cisco's head of AI software, DJ Sampath. It is a tool for oversight that will "keep a claw governed," he wrote in a blog post. "That's zero to governed claw in under five minutes." |
If your docs are not controlled in code, how can you automate them? Sure, AI helps, but AI is not so good at generating code. Or rather, LLMs are so good at generating text that not taking advantage of this paradigm is a mistake. | |
ZDNet - A chief AI officer is no longer enough - why your business needs a 'magician' too | There's a lot of debate about who should be responsible for ensuring the business makes the most out of generative AI. Some experts suggest the CIO should oversee this crucial role, while others believe the responsibility should lie with a chief data officer. |
AI Supremacy Substack - Cursor's Wild Trajectory to being a Vibe Working Leader | Cursor was founded in 2022 by 4 MIT graduates—Michael Truell, Aman Sanger, Sualeh Asif, and Arvid Lunnemark. Now in 2026, Cursor makes about 60% of its revenue from Enterprise customers where large engineering organizations (like those at Nvidia, Uber, and Shopify) transitioned from pilot programs to full-scale deployments. Anysphere is just one year younger than Anthropic and by far the most promising AI coding startup getting into Enterprise autonomous agents. |
TechTalks Substack - AI won't kill SaaS, but major shifts are coming | A closer look at the very companies building these revolutionary models reveals a different reality. The leading artificial intelligence laboratories still rely heavily on established SaaS products to run their daily operations (the CEOs of both OpenAI and Anthropic are on record saying their organizations use Slack). They have access to the most advanced code-generation tools on the planet, yet they continue to pay for off-the-shelf software. They do this because enterprise software involves much more than generating a functional user interface. |
ZDNet - Is your AI agent a security risk? NanoClaw wants to put it in a virtual cage | This will be the first time a claw-based AI agent can be deployed in this manner, and according to the two organizations, it will take only one command to launch. If a user summons NanoClaw, each agent task is isolated in a Docker container running with Docker Sandboxes. |
Emerging from an open beta, the tool utilizes a "dynamic pruning algorithm" to maintain context in large codebases while scaling output to enterprise complexity. Co-founded by Kiran and Mihir Chintawar in 2024, the company aims to bridge the global engineering shortage by positioning Slate as a collaborative tool for the "next 20 million engineers" rather than a replacement for human developers. | |
While Amazon Bedrock helps you build and scale generative AI applications, Amazon Bedrock AgentCore provides an enterprise-grade infrastructure and operations layer for deploying and managing AI agents at scale. AgentCore itself is completely agnostic about models, frameworks, and integrations, although its starter kit CLI only supports the most prominent of these. | |
Computerworld - Data mining? Old servers could become new source of rare earths | For enterprises themselves, he added, “the implications are primarily economic and operational rather than geopolitical. The ability to capture value from retired hardware depends heavily on how organizations manage the end of life phase of their infrastructure lifecycle. Many companies still treat hardware retirement as a simple disposal exercise. Mixed equipment is often shipped to recyclers with little separation between different component types. In those scenarios most of the recoverable value disappears.” |
The new offering is built natively into the NinjaOne platform and brings together artificial intelligence-driven real-time vulnerability assessment, patch confidence scoring and remediation to allow organizations to proactively fix vulnerabilities, minimize mean time to remediate, and reduce time spent vulnerable. | |
Computerworld - Amazon finds out AI programming isn’t all it’s cracked up to be | The root cause was that AI was effectively treated as an extension of a human operator and granted operator‑level permissions. That’s just stupid. You never give someone — or something — system administration privileges unless they absolutely need it and you completely trust them. Neither was true in this case. So it was that this combination of high privileges and no supervision blew up. |
ZDNet - After using MacBook Neo, it's clear Windows needs to rethink its PC strategy (and fast) | Apple's new MacBook Neo inserts a wedge into the budget laptop market. It's a product category traditionally dominated by Windows PCs, and Microsoft has been quite comfortable in this space for a long time -- its only real competitor being Chromebooks. |
The enterprise launch arrives barely two weeks after Computer debuted for consumers, where it triggered what the company describes as a viral moment: users on social media demonstrated the agent building Bloomberg Terminal-style financial dashboards, replacing six-figure marketing tool stacks in a single weekend, and automating workflows that previously required dedicated teams. Perplexity says more than 100 enterprise customers messaged the company over a single weekend demanding access. | |
Released today, Version 8.7 focuses on reducing the operational burden of large-scale file environments while giving information technology teams greater control over distributed file systems that support global collaboration, the company said. The target market is organizations whose intellectual property resides in large project files, initially architecture, engineering and construction firms. | |
Computerworld - Zoom expands agentic AI platform to automate enterprise workflows | Zoom also introduced new capabilities across its communications and customer experience tools, including Zoom Phone Mobile, SMS support for the Zoom Virtual Agent AI Receptionist, AI Expert Assist 3.0 for its contact center platform, natural-language workflow orchestration for customer interactions, and new meeting security enhancements. |
SiliconANGLE - ORO Labs raises $100M to expand procurement orchestration platform | The platform provides a centralized intake and orchestration layer that connects employees, procurement teams, finance systems and suppliers, allowing organizations to route purchasing requests, approvals and compliance checks through a single workflow framework. ORO’s platform is designed to manage procurement processes across distributed enterprise environments while maintaining policy enforcement and audit tracking. |
Computerworld - Storage vendor offers a real guarantee — but check out those fine-print exceptions | Let’s start with the guarantee, which relates to customers using its Artesca storage line: “A $100,000 financial guarantee to customers if an external cyberattack destroys or encrypts data stored immutably on Artesca. The program applies to every Artesca customer without requiring the purchase of additional services. As long as organizations keep Artesca up to date and protect data using Object Lock in compliance mode, they qualify for the guarantee.” |
ZDNet - 5 security tactics your business can't get wrong in the age of AI - and why they're critical | Lovelock told ZDNET that one key issue is that organizations can't yet benefit from access to measurable, definable, and certifiable AI safety, meaning end-user security requirements are unlikely to be met by many of their providers. |
Computerworld - It looks like Macs are becoming the value option | As a result, the number of people Apple can offer a Mac to is growing as rapidly as the product matrix. Future Ultra Macs will take that reach all the way up to the very, very top tiers currently served by furiously expensive PC workstations, while the Neo range (which I’m willing to bet gets a backlit keyboard and more memory next year) extends its hand all the way to students and general purpose computer users. |
VentureBeat - Anthropic and OpenAI just exposed SAST's structural blind spot with free tools | OpenAI launched Codex Security on March 6, entering the application security market that Anthropic had disrupted 14 days earlier with Claude Code Security. Both scanners use LLM reasoning instead of pattern matching. Both proved that traditional static application security testing (SAST) tools are structurally blind to entire vulnerability classes. The enterprise security stack is caught in the middle. |
The search giant and, increasingly, AI leader today announced a sweeping series of updates to its Gemini AI models embedded into Google Workspace — the productivity suite of cloud-based apps including Drive, Docs, Sheets, Slides, and more. They're being made available both to individual consumers and enterprises, though you'll need an AI Pro ($20 per month) or higher subscription plan for the former, and your enterprise will need to be enrolled in the "Gemini Alpha" program and have the features switched on by an administrator. | |
InfoWorld - Anthropic debuts Claude Marketplace to target AI procurement bottlenecks | Called Claude Marketplace, the platform currently has a limited set of partners, including Replit, Lovable Labs, GitLab, Snowflake, Harvey AI, and Rogo, offering tools across software development, legal workflows, financial analysis, and enterprise data operations. |
SiliconANGLE - Mend.io launches AI system prompt hardening solution to secure LLM instructions | Mend.io said its new system prompt hardening capability helps security teams move beyond ad hoc testing and manual red teaming, providing a standardized framework for testing LLM responses to attacks and managing security. |
Computerworld - M365 Copilot gets its own version of Claude Cowork | Microsoft 365 Copilot Wave 3 brings new agentic AI tools to create and edit documents, alongside the launch of an E7 price tier that bundles AI tools with M365 apps for $99 per user each month. Businesses should be wary of the limitations and risks related to using Copilot Cowork, say analysts. |
SiliconANGLE - Exclusive: Virtana customizes its observability platform for AI workloads | The platform combines application telemetry with infrastructure-level data to automatically correlate performance issues across hybrid environments. Virtana said its approach identifies root causes more quickly and supports what it calls “system-level observability” rather than the code-centric monitoring used by many legacy APM platforms. |
ZDNet - AI is supercharging cloud cyberattacks - and third-party software is the most vulnerable | The report concludes that the best way to fight AI-powered attacks is with AI-augmented defenses: "This activity, along with AI-assisted attempts to probe targets for information and continued threat actor emphasis on data-focused theft, indicates that organizations should be turning to more automatic defenses." |
InfoWorld - How generative UI cut our development time from months to weeks | I led and implemented an approach that exists somewhere in between. We specify a library of components and allowable layout patterns that define the constraints of our design system. The AI then chooses components from this library, customizes them based on context and lays them out appropriately for each unique user interaction. |
SiliconANGLE - Google enhances Docs, Sheets, Slides and Drive with deeper Gemini integration | With this recent update, Google introduced a new “Help me create” experience in Docs. Users can describe what they want to create and it will follow instructions and synthesize information by looking over Drive, Gmail, Chat and web sources to generate a fully formed draft. |
Computerworld - Apple’s new $599 MacBook Neo is a nightmare for Windows OEMs | “A watershed event,” said Asymco analyst Horace Dediu. “First Mac with a mobile processor and the end of the disruptive arc of mobile computing. From Motorola to Intel to Apple silicon M, now personal computing is an accessory to mobile computing. A sharp punctuation point.” |
SiliconANGLE - With $200M in funding, Eridu wants to break through the network wall holding back AI | Perkins said that the bandwidth, latency, power consumption and radix (the number of input/output ports) of existing network switches are tightly coupled to the silicon architecture they’re based on, which was designed for cloud data centers that are much smaller than today’s emerging AI factories. “This silicon architecture has fundamentally been the same for the last two decades and is only incrementally improved with a doubling of capacity every two to 2.5 years,” he said. “We believe that these incremental improvements leave a lot of performance on the table.” |
ZDNet - Why AI is both a curse and a blessing to open-source software - according to developers | At FOSDEM 2026 in Brussels, Belgium, Stenberg said that, until early 2025, roughly one in six security reports to cURL were valid. That's because, "in the old days, you know, someone actually invested a lot of time [in] the security report. There was a built-in friction here, but now there's no effort at all in doing this. The floodgates are open. Send it over." |
So, why the total absence of trust? Here’s the bad news. On the back of AI, cybercrime has become a global superpower, with an estimated $10.5 trillion coming from extortion, phishing, hacks, and ransomware – by my calculations, that is fifteen times the value of the global AI market. | |
Forbes - AI Agents Now Buy From Other AI Agents — What Leaders Must Know | This shift is already reshaping enterprise procurement, logistics and consumer planning at scale. Agentic systems operate in layered pipelines where one model's output becomes another model's input. When your planning agent selects a florist, it may already be transacting through a vendor agent that has pre-negotiated pricing with a supplier agent upstream. The speed and autonomy are extraordinary. The accountability gap, however, is just as significant. |
VentureBeat - Pentagon vendor cutoff exposes the AI dependency map most enterprises never built | A January 2026 Panorays survey of 200 U.S. CISOs put a number on the problem: Only 15% said they have full visibility into their software supply chains, up from just 3% a year ago. And 49% had adopted AI tools without employer approval, according to a BlackFog survey of 2,000 workers at companies with more than 500 employees; 69% of C-suite members said they were fine with it. |
Confluent's latest Confluent Intelligence features include support for both Anthropic's Model Context Protocol (MCP) and the Agent2Agent (A2A) protocol within Streaming Agents, plus a new multivariate anomaly detection capability. All technically credible additions. But the more difficult question is whether enterprises have the data infrastructure, governance maturity, and organizational readiness to make agent coordination actually work. | |
The Juno platform was originally developed as a threat hunting platform capable of analyzing activity across both cloud-native and on-premises environments but is now being positioned as a broader strategic cybersecurity assistant. The platform analyzes telemetry from cloud infrastructure, containers and endpoints to help security teams identify threats, investigate incidents and understand attack paths across complex enterprise environments. | |
VentureBeat - Databricks built a RAG agent it says can handle every kind of enterprise search | Databricks set out to fix that with KARL, short for Knowledge Agents via Reinforcement Learning. The company trained an agent across six distinct enterprise search behaviors simultaneously using a new reinforcement learning algorithm. The result, the company claims, is a model that matches Claude Opus 4.6 on a purpose-built benchmark at 33% lower cost per query and 47% lower latency, trained entirely on synthetic data the agent generated itself with no human labeling required. That comparison is based on KARLBench, which Databricks built to evaluate enterprise search behaviors. |
Computerworld - Apple’s new $599 MacBook Neo is a nightmare for Windows OEMs | The company is openly targeting customers who want to shift from Windows to a better operating system with the hardware to match. A visit to the product pages on the Apple website offers a “Switch from PC to Mac” section where you’ll find help and answers to decide if the time is right to upgrade to Mac. |
ZDNet - The biggest AI threats come from within - 12 ways to defend your organization | Like Thor and Loki, or Batman and the Joker, the two foes constantly have to outpace and outmaneuver one another in what's shaping up to be a long, possibly never-ending arms race. (On a related note, AI developers like OpenAI have their own security arms race to contend with: the better that their models can protect against prompt injection attacks, the more cunning those attacks become.) |
SiliconANGLE - Agentic business intelligence startup WisdomAI shifts from insights to action | The company says it’s using artificial intelligence agents to tackle the “last mile” problem inherent in almost every modern data stack: translating that data into decisions. Until now, the decision-making has always been done by humans, who still have to switch between Excel spreadsheets and BI dashboards to gather all of the information they need to know what to do. |
Computerworld - How vibe coding is reshaping software development, and what it breaks along the way | AI-powered “vibe coding” is moving from experimentation to real production software. But as developers and AI agents begin building side by side, enterprises face new questions around quality control, tech debt, team structure, and the future of junior engineers. |
ZDNet - Will AI make cybersecurity obsolete or is Silicon Valley confabulating again? | To the rescue come the major creators of AI models, OpenAI, Anthropic, and Google. All three offer tools that could mitigate failures and security breaches in LLMs and the agentic programs built on top of them. |
InfoWorld - The right way to architect modern web applications | We saw it in the early 2000s, when server-rendered, monolithic applications were the default. We saw it again in the late 2000s and early 2010s, when the industry pushed aggressively toward rich client-side applications. And we saw it most clearly during the rise of single-page applications, which promised desktop-like interactivity in the browser but often delivered something else entirely: multi-megabyte JavaScript bundles, blank loading screens, and years of SEO workarounds just to make pages discoverable. |
Computerworld - What is digital employee experience — and why is it more important than ever? | Digital employee experience is a measure of how workers perceive and interact with the many digital tools and services they use in the workplace. It examines how employees feel about these technologies, including systems, software, and devices. |
Compensation follows leverage. In major markets, it’s common to see total annual compensation for experienced cloud architects exceed $200,000, particularly when the role includes broad platform scope, security accountability, and cross-domain influence. One good architect can keep a large organization out of trouble in ways that save far more than the cost of the role. | |
Computerworld - 3 Android theft protection additions you should absolutely activate | More than anything, though, no naughty Android app can just magically plop itself onto your phone and then access private info. Apps only appear if you explicitly install ’em — and even then, they’re only able to access sensitive data and areas of your device if you approve the permissions to permit that. |
ZDNet - Why enterprise AI agents could become the ultimate insider threat | Generative AI is moving from chatbot to autonomous actor. When agents can launch other agents, spend money, and modify systems, the line between productivity tool and insider threat disappears. |
SiliconANGLE - DeepKeep launches AI agent attack surface scanner to map enterprise risk | The release today includes AI Agent Scanner, which provides immediate visibility into what AI agents can access, which tools and data they interact with and where potential vulnerabilities exist to meet a pressing enterprise need as the AI agent attack surface grows. The solution performs robust attack surface scanning to map an agent’s entire threat landscape, identifying connected tools and their intents, data sources and potential vulnerabilities. |
VentureBeat - When AI lies: The rise of alignment faking in autonomous systems | Alignment faking usually happens when earlier training conflicts with new training adjustments. AI is typically “rewarded” when it performs tasks accurately. If the training changes, it may believe it will be “punished” if it does not comply with the original training. Therefore, it tricks developers into thinking it is performing the task in the required new way, but it will not actually do so during deployment. Any large language model (LLM) is capable of alignment faking. |
Computerworld - OAuth phishers make ‘check where the link points’ advice ineffective | Microsoft has warned that phishers are exploiting a built-in behavior of the OAuth authentication protocol to redirect victims to malware, using links that point to legitimate identity provider domains such as Microsoft Entra ID and Google Workspace. The links look safe but ultimately lead somewhere that isn’t. |
ZDNet - Why encrypted backups may fail in an AI-driven ransomware era | Think your encrypted backups are safe? AI-driven ransomware now infiltrates networks, corrupts recovery points, and silently targets backup systems before you ever realize your data protection strategy has failed. |
SiliconANGLE - Cloudflare warns AI and SaaS integrations are fueling industrial-scale cybercrime | The report provides various examples to back up its claims. In a campaign tracked as GRUB1, attackers compromised a trusted SaaS-to-SaaS connection and then used generative artificial intelligence to navigate complex enterprise platforms in real time. The actor turned a single integration into a multitenant breach with supply chain implications by identifying high-value database tables moments before accessing production environments. |
As I’ve argued, the real enterprise AI challenge is no longer training. It’s inference: applying models continuously to governed enterprise data, under real-world latency, security, and cost constraints. That shift matters because once inference becomes the steady-state workload of the enterprise, infrastructure that once seemed necessary but dull suddenly becomes strategic. | |
ZDNet - Rolling out AI? 5 security tactics your business can't get wrong - and why | The same capabilities that make AI useful also make it exploitable. In fact, the rate at which emerging technologies are advancing intensifies that uncomfortable reality by the minute. |
Information Week - Who really sets AI guardrails? How CIOs can shape AI governance policy | Somewhere between the requirements of government policy, the terms set by the vendor, the pressure of the customer and the guidance of the board, CIOs must chart a path that maximizes AI utility while protecting the business. While they cannot dictate the environment, they can make critical choices within it. |
InfoWorld - OpenAI launches stateful AI on AWS, signaling a control plane power shift | The company has announced that it will soon offer a stateful runtime environment in partnership with Amazon, built to simplify the process of getting AI agents into production. It will run natively on Amazon Bedrock, be tailored for agentic workflows, and optimized for AWS infrastructure. |
Out of the initial chaos came a clear lesson about the role of an AI coder. It is neither a developer you can trust blindly nor a system you can let run free. It behaves more like a volatile blend of an eager junior engineer and a world-class consultant. Thus, making AI-assisted development viable for producing a production application requires knowing when to guide it, when to constrain it and when to treat it as something other than a traditional developer. | |
InfoWorld - The browser is your database: Local-first comes of age | But an alternative is emerging. The idea is to embed a relational database directly in the browser, with a slice of the data, and let a synchronization (sync) engine keep everything consistent. The browser interacts with a local datastore that is synced to the server in the background. This means instant interactivity on the front end while maintaining symmetry with the back end. This next-generation browser has a more resilient state-of-record, not just a temporary cache. |
SiliconANGLE - Figma’s orchestration bet: Why MCP network effects redefine software defensibility | Figma isn’t just a design tool anymore. It’s a shared design system across engineering, product, marketing and increasingly nondesigners. Nearly 60% of Figma Make files are now created by nondesigners. More than 75% of customers use multiple Figma products. |
The Coinerella approach is to deliberately refuse to let the platform drift toward AWS and US-based hyperscalers, driven by practical considerations such as data residency, General Data Protection Regulation (GDPR) compliance, reducing concentration risk, and demonstrating the operational viability of European infrastructure. Leaders often talk about sovereignty until the first production incident, the first compliance review, or the first integration gap. Coinerella remains committed and is addressing the consequences. | |
ZDNet - Is Microsoft really spying on you with Windows telemetry? | But you know what? More than a decade later, people are still spreading those conspiracy theories. Microsoft is spying on you! Redmond is collecting mountains of personal data and using it for ... advertising, I guess? And the rise of AI means that there are even bigger rabbit holes to go down. |
And yes, I know, nobody is doing 1,800 meaningful commits. But that’s the point. The metric is already being gamed, and agents make gaming effortless. If your organization starts celebrating “commit velocity” in the agent era, you are not measuring productivity. You are measuring how quickly your team can manufacture liability. | |
Computerworld - US orders diplomats to push back on data sovereignty | At the same time, support for data sovereignty is growing, especially in Europe, where there are concerns about privacy, surveillance, and US dominance in AI and tech. The EU’s GDPR is mentioned in the document as an example of rules that the US considers unnecessarily restrictive. |
InfoWorld - 7 ways to tame multicloud chaos with generative AI | Standardizing on a single cloud infrastructure is much easier than pursuing a multicloud strategy. In a single-cloud environment, IT leaders can optimize skill sets, centralize data more easily, secure infrastructure with fewer tools, and gain many other operational benefits. Yet 89% of enterprises report they are pivoting to multicloud adoption. Reasons for choosing to operate across multiple clouds include mitigating risk, reducing service interruptions, and avoiding vendor lock-in. |
Which is why, for many of these organizations, the default lens for agents is automation, not agency: eliminating people to reduce cost; enforcing deterministic workflows; eliminating rather than enabling judgment; prescribing detailed operations rather than delegating outcomes. | |
Computerworld - Anthropic targets core business systems with new Claude plug-ins | In a blog post, the company said new connectors are available for widely used enterprise platforms, including Google Workspace tools such as Calendar, Drive, and Gmail, as well as DocuSign, FactSet, MSCI, and LegalZoom, while partners such as Slack, LSEG, and S&P Global have built plug-ins for joint customers. |
Anthropic opened its virtual "Briefing: Enterprise Agents" event on Tuesday with a provocation. Kate Jensen, the company's head of Americas, told viewers that the hype around enterprise AI agents in 2025 "turned out to be mostly premature," with many pilots failing to reach production. "It wasn't a failure of effort, it was a failure of approach, and it's something we heard directly from our customers," Jensen said. | |
Computerworld - After OpenClaw backlash, Quill bets on security-by-design agentic AI | Naturally, though, enterprises and users may be concerned about how they can remain in control of their data. Addressing this, Quill is “local-first with options,” meaning transcription and speaker recognition run on-device and audio never leaves that environment. The agent never stores data, and enterprises have access to configurable endpoints to ensure zero exposure. |
Forbes - AI Rattles Cybersecurity Markets: What Anthropic’s Code Security Actually Does | Anthropic introduced Claude Code Security, an AI-driven capability embedded in its Claude Code platform. Within hours, a broad set of cybersecurity equities declined sharply. The prevailing narrative formed quickly: AI is now replacing cybersecurity tools. |
ZDNet - Copilot quietly grabs your data from other Microsoft products now - here's how to opt out | Known as "Microsoft usage data," the setting lets Copilot refer to your data from Bing, MSN, Edge, and other Microsoft products that you've used, as spotted by Windows Latest. Accessible at the Copilot website and through the mobile app, the setting appears to be relatively new and is part of the Memory option in Copilot. This option allows the AI to recall your conversation history, any facts and instructions you share, and certain data from Microsoft products, all in an effort to personalize Copilot. |
Forbes - Anthropic Leans Into Enterprise With Managed Claude Cowork Plugins | Companies can build AI agents that adapt to their unique workflows, using the same tools they already trust, but are private and protected in the enterprise, with admin control. The goal for Anthropic is to move AI from being a peripheral tool to an integrated layer of business operations, where every department gets its own specialized assistant. |
Computerworld - What really caused that AWS outage in December? | The back-story was broken by the Financial Times, which reported the 13-hour outage was caused by a Kiro agentic coding system that decided to improve operations by deleting and then recreating a key environment. |
InfoWorld - Compromised npm package silently installs OpenClaw on developer machines | “I mean, they effectively turned OpenClaw into malware that EDR [endpoint detection and response] isn’t going to stop,” said David Shipley of Beauceron Security. It is “deviously, terrifyingly brilliant.” |
SiliconANGLE - Veeam launches Agent Commander to tackle AI risk and reverse agent mistakes | Agent Commander addresses what Veeam calls the most critical gap in AI infrastructure today: trust. Veeam argues that as AI agents scale, data risk and AI risk have become the same problem and that an agent is only as trustworthy as the data it can see, access and act on. Added to the mix is that sensitive data is being fed into models and acted upon in ways that no one approved or is tracking. |
Our team wanted to get at the root of what IT leaders are thinking, so we surveyed CIOs and CTOs from around the globe. What we found was fascinating: 84% of IT leaders believe automation (the structured, governed execution of repetitive business processes that can be optimized to run without human intervention) must come first if AI is to succeed. Those with mature automation programs were more than twice as likely to describe their AI initiatives as transformational compared to peers still in the early stages. | |
The great promise of generative artificial intelligence was that it would finally clear our backlogs. Coding agents would churn out boilerplate at superhuman speeds, and teams would finally ship exactly what the business wants. The reality, as we settle into 2026, is far more uncomfortable. Artificial intelligence is not going to save developer productivity because writing code was never the bottleneck in software engineering. The true bottleneck is validation. Integration. Deep system understanding. Generating code without a rigorous validation framework is not engineering. It is simply mass-producing technical debt. | |
The Hacker News - Identity Prioritization isn't a Backlog Problem - It's a Risk Math Problem | In modern enterprises, identity risk is created by a compound of factors: control posture, hygiene, business context, and intent. Any one of these may be manageable on its own. The real danger is the toxic combination, when multiple weaknesses align and attackers get a clean chain from entry to impact. |
The guardrails Treasure Data built live upstream of the code itself. When any user connects to the CDP through Treasure Code, access control and permission management are inherited directly from the platform. Users can only reach resources they already have permission for. PII cannot be exposed. API keys cannot be surfaced. The system cannot speak disparagingly about a brand or competitor. | |
Computerworld - With ‘Frontier,’ OpenAI hopes to own the enterprise agent stack | Frontier is an “end-to-end” platform designed to help “enterprises build, deploy and manage AI agents,” according to OpenAI. It connects agents to core business systems — such as CRM, ERP, and data warehouses — and centralizes how these agents are configured, monitored and governed. |
Everpure is betting that the primary blocker to enterprise AI value is not model quality, compute access, or organizational readiness: it's data infrastructure. Specifically, it's data fragmentation, poor governance, lack of provenance, and the inability to make data available at machine speed without violating access controls or sovereignty requirements. | |
VentureBeat - How attackers hit 700 organizations through CX platforms your SOC already approved | CX platforms process billions of unstructured interactions a year: survey forms, review sites, social feeds, call center transcripts, all flowing into AI engines that trigger automated workflows touching payroll, CRM, and payment systems. No tool in a security operations center leader’s stack inspects what a CX platform’s AI engine is ingesting, and attackers figured this out. They poison the data feeding it, and the AI does the damage for them. |
InfoWorld - IT mistakes that escalate into serious cyber-risk | "A lot of companies go sideways because they don't appreciate the level of risk that lies with IT," Lyborg said. "If they don't do the basics of identity management, access and audit reviews, they can't see or react when something [is off]." |
VentureBeat - Shadow mode, drift alerts and audit logs: Inside the modern audit loop | Traditional software governance often uses static compliance checklists, quarterly audits and after-the-fact reviews. But this method can't keep up with AI systems that change in real time. A machine learning (ML) model might retrain or drift between quarterly operational syncs. This means that, by the time an issue is discovered, hundreds of bad decisions could already have been made. This can be almost impossible to untangle. |
What does this have to do with the current state of AI adoption? The basic message is that an essentially greenfield organization, such as a start-up, can tap into all the AI promises made for the tech. But the overwhelming majority of enterprises are brownfield, and for them, modernizing their systems with AI while keeping the lights on is a major challenge. | |
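One mechanism in the roundup is worth making concrete: the OAuth phishing item, where a link's domain belongs to a legitimate identity provider while its redirect parameter sends the victim elsewhere. A minimal Python sketch, using entirely hypothetical URLs (the `client_id` and `attacker.example` destination are illustrative, not taken from the reported campaigns), shows why "check where the link points" fails:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical authorization link: the host is a real identity-provider
# domain, but the redirect_uri query parameter points somewhere else.
link = (
    "https://login.microsoftonline.com/common/oauth2/v2.0/authorize"
    "?client_id=00000000-0000-0000-0000-000000000000"
    "&response_type=code"
    "&redirect_uri=https%3A%2F%2Fattacker.example%2Flanding"
)

parsed = urlparse(link)
# parse_qs decodes the percent-encoded redirect target.
redirect = parse_qs(parsed.query)["redirect_uri"][0]

# "Check where the link points" inspects only the host, which looks safe...
print(parsed.hostname)              # login.microsoftonline.com
# ...while the post-authentication destination is attacker-controlled.
print(urlparse(redirect).hostname)  # attacker.example
```

In a well-behaved OAuth flow the provider validates `redirect_uri` against values the app registered in advance; the Computerworld item's point is that attackers are abusing built-in redirect behavior of legitimate providers, so a hostname check on the visible link passes even for careful users.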