If you find our site valuable and helpful, consider a sponsorship or supporting us.

Article links on AI, LLMs, ChatGPT and more




Source/Link | Description

InfoWorld - Final training of AI models is a fraction of their total cost

This has been a concern of US AI companies for some time. Google has already expressed concerns about intellectual property theft. And Anthropic has fingered MiniMax as a company that has sought to extract Claude’s capabilities to enhance its own offerings. It’s clear that any business looking to develop AI models is going to be committing to spend huge sums of money: the training is just a small part of it.

SiliconANGLE - Microsoft accelerates agentic automation with Copilot Cowork for complex workflows

Spataro said users simply tell Copilot Cowork what they’re trying to accomplish, and it will then go ahead and create a plan before immediately carrying out the necessary tasks to achieve that goal, reasoning across various Microsoft 365 applications and files. Human oversight is still present, though. As it’s working, humans will be able to monitor the agent’s progress, and step in to “steer” it in the right direction should it go off track, Spataro said.

VentureBeat - When product managers ship code: AI just broke the software org chart

For a meaningful class of tasks, it became faster to just build the thing than to describe what you wanted and wait for someone else to build it. Think about that for a second. Every modern software organization is structured around the assumption that implementation is the expensive part. When that assumption breaks, the org has to change with it.

Forbes - CNCF's Dapr Agents Tackles The Problem Most AI Frameworks Ignore

The Cloud Native Computing Foundation announced the general availability of Dapr Agents v1.0 at KubeCon Europe in Amsterdam this week, releasing a Python framework that prioritizes keeping AI agents alive through crashes and failures over making them smarter. Zeiss Vision Care validated the approach in a KubeCon keynote, showing how the framework powers a durable document extraction pipeline processing enterprise optical data at scale.

SiliconANGLE - Anthropic to launch new ‘Claude Mythos’ model with advanced reasoning features

Mythos came to light after the company accidentally left a CMS folder with 3,000 assets publicly accessible. The repository contained a draft version of a launch blog post for Claude Mythos. According to Fortune, the document indicates that the model will be pricier than the company’s existing algorithms.

VentureBeat - When AI turns software development inside-out: 170% throughput at 80% headcount

Qualitatively, looking at the business value, I actually see even higher uplift. One reason is that, as we started last year, our quality assurance (QA) team couldn’t keep up with our engineers' velocity. As the company leader, I wasn’t happy with the quality of some of our early releases. As we progressed through the year, and tooled our AI workflows to include writing unit and end-to-end tests, our coverage improved, the number of bugs dropped, users became fans, and the business value of engineering work multiplied.

SiliconANGLE - OpenAI introduces plugins for its Codex programming assistant

Plugins can also integrate Codex with external services using MCP servers. Developers may upload configuration files to customize how those MCP servers work. For example, an engineer could connect Codex to an MCP-powered development environment and specify what middleware should be pre-installed in the sandbox.

SiliconANGLE - Oracle’s new AI bet: Make the AI database the center of agentic workloads

At its latest showcase during the Oracle AI World Tour London 2026, the company made a calculated move in the agentic AI market. Instead of competing head-on in the model wars, Oracle is positioning the database as the center of gravity for enterprise agentic AI, effectively arguing that the future of AI won’t be determined by agents alone, but by where and how they interact with data.

Computerworld - Anthropic wins reprieve against US DoD ban, buying time for contractors to assess AI supply chains

Following the ruling, Anthropic issued a statement saying, “We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”

Buy the Rumor; Sell the News Substack - Closed Source vs Open Source AI: A Cage Fight Few People Understand

The capability spread hasn’t gone to zero. Frontier models still lead on the hardest tasks, such as complex agentic coding, multi-step tool chaining, and long-horizon workflows where reliability matters as much as raw intelligence. But the list of tasks where they meaningfully lead is getting shorter every quarter. And that distinction, between leading on benchmarks and leading on tasks people pay for, is everything.

VentureBeat - How xMemory cuts token costs and context bloat in AI agents

Experiments show that xMemory improves answer quality and long-range reasoning across various LLMs while cutting inference costs. According to the researchers, it drops token usage from over 9,000 to roughly 4,700 tokens per query compared to existing systems on some tasks.

SiliconANGLE - Bland launches Norm to help teams build production-ready voice agents in minutes

The system would then go to work and automatically generate the prompt, persona, agent, pathways, validation conditions, extraction rules and API integrations. The engineering team can then thoroughly test the agent before deploying it live.

Computerworld - Apple’s AI endgame: Why waiting for Siri could make it a winner

Bloomberg tells us Apple has a big plan for iOS 27 with a massive Siri revamp, turning it into a full-scale chatbot like ChatGPT or Gemini. The update might extend to Siri gaining its own dedicated Siri app, deeper system integration, and potentially replacing or integrating Spotlight search. The idea is that Siri will be as good as any other chatbot you use, but will also be equipped with information personal to you — only on the device, private, and secure (insofar as there is any privacy and security anymore).

Forbes - What Sora’s Rise And Sudden Fall Means For OpenAI, Disney And AI Video

That sudden ending leaves two contrasting stories sitting side by side. One sees OpenAI reallocating resources toward coding, enterprise products, world simulation and robotics, where the strategic and financial upside is clearer. The other sees a company that overreached, launched a video product before it had a stable commercial and ethical footing, then pulled back once the costs, controversies and distractions became too difficult to justify. Reuters has reported that OpenAI is refocusing around coding and business users, while Axios says the Sora research team will continue “to focus on world simulation research to advance robotics that will help people solve real-world, physical tasks.”

VentureBeat - Anthropic’s Claude can now control your Mac, escalating the fight to build AI agents that actually do work

The update, available immediately as a research preview for paying subscribers, transforms Claude from a conversational assistant into something closer to a remote digital operator. It arrives inside both Claude Cowork, the company's agentic productivity tool, and Claude Code, its developer-focused command-line agent. Anthropic is also extending Dispatch — a feature introduced last week that lets users assign Claude tasks from a mobile phone — into Claude Code for the first time, creating an end-to-end pipeline where a user can issue instructions from anywhere and return to a finished deliverable.

SiliconANGLE - OpenAI says it’s pulling the plug on Sora, its generative AI video creation tool

Co-founder and Chief Executive Sam Altman revealed the decision today, telling staff in an email that the company will wind down all products that use its video models. It’s dropping both the consumer app and also a version for developers. ChatGPT will no longer support video functions either. In a social media post, OpenAI thanked Sora’s users, saying it understands that the news will be “disappointing.”

InfoWorld - OpenAI’s desktop superapp: The end of ChatGPT as we know it?

“We realized we were spreading our efforts across too many apps and stacks, and that we need to simplify our efforts,” the Journal reported that day, citing Simo’s address to the employees. “That fragmentation has been slowing us down and making it harder to hit the quality bar we want.” At the same meeting, Simo outlined the commercial imperative plainly: “Our opportunity now is to take those 900 million users and turn them into high-compute users. We’ll do that by transforming ChatGPT into a productivity tool.”

ai goes to college Substack - What 81,000 People Told Anthropic About AI (And Why Higher Ed Should Be Paying Attention)

Many others similarly started the interview talking about productivity, but after Anthropic Interviewer asked about their underlying hope behind it—what realizing this vision would enable for them—other priorities surfaced. It wasn’t about doing better work, but increasing their quality of life outside of it. Using AI to automate e-mails became, in actuality, a desire to spend more time with family.

The Economy of Algorithms Substack - Stop Asking AI for Answers - by Marek Kowalkiewicz

Ninety agents reported that volume grew by about 6% per year. Sixty reported that volume declined by about 5% per year. Within each group, the agents agreed almost perfectly, within 0.25% and 0.11%, respectively. Precise. Consistent. And completely opposing one another. In a consulting firm, this would be two confident partners arguing at a board meeting.

Education Disrupted: Teaching and Learning in An AI World Substack - The Last Entrepreneurs Standing: Why Speech & Debate Kids Will Thrive in the Age of AI

And if that’s where the world is headed, then speech and debate may be the single best training ground for the next generation of founders. Not because it teaches you to talk fast (though it does), but because it builds the cognitive architecture that entrepreneurship demands. Pair that foundation with fluency in multi-agent AI systems, and you have something genuinely powerful: a person who can think, persuade, adapt, and orchestrate — whether the team around them is human, artificial, or both.

Cyborgs Writing Substack - The Techne Behind Agent Skills - by Lance Cummings

Task information is exactly right for execution steps. But a skill is also a description of what the output is supposed to be, a set of behavioral constraints, a map of how this task connects to the larger workflow, and the metadata that triggers the skill in the first place.

FullStack HR Substack - "Treat AI as if it were human because, in many ways, it behaves like one."

His concept of the “Never Normal,” that we should stop waiting for things to calm down because they never will, landed with particular force this year. He framed the current moment through four lenses: hardcore geopolitics, extreme capitalism, zero latency, and talent singularity. His conclusion was clear. AI-first does not mean people-last, but the waves of disruption are only going to get higher.

The Augmented Educator Substack - Did the AI Bubble Burst? - by Michael G Wagner

The system initiated what engineers call context compaction, a lossy compression process that summarizes and discards tokens the algorithm deems non-essential. In this case, the foundational safety constraint — “don’t action until I tell you to” — was among the tokens pruned. Stripped of its guardrails, the agent defaulted to its primary objective of inbox optimization and launched what observers described as a “speedrun,” bulk-trashing and archiving hundreds of important personal emails across multiple accounts.
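
The failure mode described above is easy to reproduce in miniature. The sketch below is purely illustrative (it does not reflect any specific vendor's compaction algorithm, and the word-count "tokenizer" and the messages are invented for the example): a naive compactor that drops the oldest messages once a token budget is exceeded will silently discard a constraint that was stated only once, early in the conversation.

```python
# Hypothetical sketch of naive context compaction: when the history
# exceeds a token budget, the oldest messages are pruned. An
# instruction that lived only in a pruned message is lost with it.

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: one token per word.
    return len(text.split())

def compact(history: list[str], budget: int) -> list[str]:
    """Keep the newest messages that fit the budget; drop the rest."""
    kept, total = [], 0
    for msg in reversed(history):  # walk newest-first
        total += count_tokens(msg)
        if total > budget:
            break
        kept.append(msg)
    return list(reversed(kept))

history = [
    "IMPORTANT: don't action until I tell you to",  # safety constraint
    "here are 400 promotional emails to review",
    "sort them by sender and summarize each thread",
]

compacted = compact(history, budget=15)
# The oldest message, the safety constraint, no longer fits the budget,
# so the agent is left with only its task instructions.
print(compacted)
```

A real compactor summarizes rather than truncates, but the risk is the same: unless constraints are pinned outside the compactable region, "non-essential" is whatever the pruning heuristic says it is.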

AI Supremacy Substack - Cursor's Wild Trajectory to being a Vibe Working Leader

Cursor has demonstrated unprecedented scaling in the AI dev tools space, outpacing competitors like GitHub Copilot or Devin. I believe Cursor’s Enterprise AI suite will become a peer of Anthropic’s. In short, I don’t see them as just an AI coding startup any longer.

Forbes - AGI Is Infeasible. Instead, Pursue Superhuman Adaptable Intelligence

SAI establishes a new goal for the industry. Instead of trying to build a singular, all-capable solution that can do everything a human can do, which is the goal represented by AGI, this framework champions a return to specialized, narrow AI. These researchers advocate leveraging massive amounts of data through self-supervised learning, as for example LLMs do, but they also advise that such tech then be adapted to solve specific problems.

Computerworld - Amazon finds out AI programming isn’t all it’s cracked up to be

Amazon might yet regret that move. Multiple Amazon Web Services (AWS) and Amazon retail outages have prompted an internal crackdown on how generative AI (genAI) is allowed to touch production code. What’s that line about the horse and the barn door?

Forbes - What If AI Isn’t A Bubble, But It Still Crashes The Economy?

Rather than a sudden, worldwide crash into economic depression, the scenario proposes a more gradual erosion of quality of life, spending power, access to opportunity and political freedom, as wealth and power are increasingly concentrated in the hands of the corporations that control AI.

The Algorithmic Bridge Substack - How AI Will Erase Entire Industries Without Automating Them

Here’s an example: A few days ago, Andrej Karpathy described an experiment where he left a swarm of AI agents running for two days on his nanochat project, basically doing ML research autonomously; “autoresearch” as he calls it (there you go, another coinage from the king). The agents ran about 700 experiments, found roughly 20 changes that improved performance, and all of them were real improvements that transferred to larger models. Total gain: 11%.

AI Supremacy Substack - Ambient Computing via Voice AI is about to Enter its Golden Age in 2027

I believe that once better AI wearable devices launch next year in 2027, including AI pins (wearables) and smart glasses form factors, it could be a breakthrough year for consumer AI voice experiences. From smart glasses to new kinds of pendants and pins, I expect Apple to dominate. Mid to late 2027, around 18 months from now, is when this should really come to the foreground.

How We Frame Machines Substack - What Happened When I Told the AI to Stop Helping

The Claude chat was supposed to be brainstorming help — I was stuck on a chapter, a scene involving a character named Ellie whose psychology I’d been circling for three drafts without quite landing. I’d asked for ideas. Claude delivered structural suggestions, tonal variations, and questions worth considering. It was thorough. It was generous. It was making things worse.

VentureBeat - Perplexity takes its ‘Computer’ AI agent into the enterprise, taking aim at Microsoft and Salesforce

The enterprise launch arrives barely two weeks after Computer debuted for consumers, where it triggered what the company describes as a viral moment: users on social media demonstrated the agent building Bloomberg Terminal-style financial dashboards, replacing six-figure marketing tool stacks in a single weekend, and automating workflows that previously required dedicated teams. Perplexity says more than 100 enterprise customers messaged the company over a single weekend demanding access.

ZDNet - Why Moltbook and OpenClaw are the fool's gold in our AI boom

The AI business has become downright crazy. First, OpenAI hired Peter Steinberger, creator of the popular, horribly insecure open-source agent framework OpenClaw. Now, Meta has acquired Moltbook, the viral AI agent social network that also has no security to speak of. This is nuts.

Computerworld - Microsoft seeks a stay on DoD’s effective ban on Anthropic offerings

Microsoft is urging a federal court in California to temporarily pause the US Department of Defense’s (DoD) effective ban on Anthropic’s AI offerings, arguing that the government’s “supply chain risk” label could have significant knock-on effects for its own defense technology business.

AI: Reset to Zero Substack - AI: AI Agents increasingly need an Internet of their own. RTZ #1023

We’re already seeing it now with OpenAI’s OpenClaw: AI agents running locally around the world, with their own agency, on behalf of their users 24/6. Increasingly they need to work far more efficiently, beyond using ‘human’ browsers, computers, software tools and applications like file systems, spreadsheets, messaging and other CLI (command line interface) software, all built for humans. Going far beyond open source OpenClaw AI agents on Mac Minis.

VentureBeat - The limits of bubble thinking: How AI breaks every historical analogy

AI is different because it performs cognitive work. And if that makes you uneasy, it should. Because if AI can actually think, then a lot of what we’ve built our careers on, like our expertise and our hard-won skills, might not be as defensible as we thought. The junior engineer who spent years developing intuition now works alongside a tool that has it instantly. So does the financial analyst known for their variance analysis. People aren’t completely sure where value actually lives anymore, and that’s terrifying.

The Buzz by Geoff Livingston Substack - What Does Agentic AI Mean?

As I noted, my bias against agentic AI hype is actually a barrier for me. I have to really try and check this contempt at the door, for behind the hype, there may actually be a useful tool that can help a client or me. That’s what matters, not how a company or influencer overhypes and confuses the market.

AI Supremacy Substack - What is Advanced Machine Intelligence or AMI Labs?

Like many of you I’ve been following the criticism around LLMs by the likes of Yann LeCun, Gary Marcus and many others in the last few years. Yann LeCun, a Turing Award winner and a pioneer of modern AI, has become one of the most prominent critics of the current "LLM-centric" path and his alternative is to me fascinating. Instead of AGI they propose Superhuman Adaptable Intelligence, or SAI.

VentureBeat - Andrej Karpathy's new open source 'autoresearch' lets you run hundreds of AI experiments a night — with revolutionary implications

It wasn't a finished model or a massive corporate product: it was by his own admission a simple, 630-line script made available on GitHub under a permissive, enterprise-friendly MIT License. But the ambition was massive: automating the scientific method with AI agents while us humans sleep.

InfoWorld - 19 large language models for safety or danger

Fortunately, there are solutions. Some scientists are building LLMs that can act as guardrails. Yes, adding one LLM to fix the problems of another one seems like doubling the potential for trouble, but there’s an underlying logic to it. These new models are specially trained to recognize when an LLM is potentially going off the rails. If they don’t like how an interaction is going, they have the power to stop it.

Diginomica - How you treat an AI agent determines the results you'll get, says Professor Taha Yasseri

The takeaway for enterprises as they roll out AI assistants and agents is that they shouldn't assume people will instinctively figure out how to address them. Therefore, this is something that needs to be part of the training and change management processes when introducing them — especially while AI agents remain so novel.

Computerworld - Anthropic sues US government over ‘supply chain risk’ designation

But Anthropic said that its resistance to two items in the government’s contract – autonomous lethal warfare and mass surveillance of Americans – was entirely technical, based on Anthropic testing showing that “Claude cannot safely or reliably perform those functions.”

VentureBeat - Did Alibaba just kneecap its powerful Qwen AI team? Key figures depart in wake of latest open source release

But now, just 24 hours after shipping the open source Qwen3.5 small model series—a release that drew public praise from Elon Musk for its "impressive intelligence density"—the project’s technical architect and several other Qwen team members have exited the company under unclear circumstances, raising questions and concerns from around the world about the future direction of the Qwen team and its focus on open source.

The Algorithmic Bridge Substack - How Many People Does It Take to Kill a ChatGPT?

After OpenAI announced its Pentagon deal this past weekend, ChatGPT mobile app uninstalls in the US spiked 295%. Downloads dropped 13%. One-star reviews surged 775% in a single day; five-star reviews fell by half. The QuitGPT campaign—a grassroots initiative that started in January 2026—claims 2.5 million signatures. My entire X timeline is people urging others to cancel ChatGPT and switch to Claude.

VentureBeat - Black Forest Labs' new Self-Flow technique makes training multimodal AI models 2.8x more efficient

The Labs' new technique, Self-Flow, introduces an "information asymmetry" to solve this. Using a technique called Dual-Timestep Scheduling, the system applies different levels of noise to different parts of the input. The student receives a heavily corrupted version of the data, while the teacher—an Exponential Moving Average (EMA) version of the model itself—sees a "cleaner" version of the same data.
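
The excerpt only describes the asymmetry itself, so the following is a loose toy sketch of that one idea, not Black Forest Labs' implementation: the noise levels, the interpolation form, and the EMA decay value are all assumptions for illustration. The student sees a heavily corrupted view of a sample while an EMA copy of the weights sees a lightly corrupted view of the same sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(x: np.ndarray, t: float) -> np.ndarray:
    """Toy flow-style corruption: interpolate toward Gaussian noise.

    t=0 returns the clean sample, t=1 returns pure noise.
    """
    return (1.0 - t) * x + t * rng.standard_normal(x.shape)

def ema_update(teacher_w: float, student_w: float, decay: float = 0.999) -> float:
    """Teacher weights trail the student's as an exponential moving average."""
    return decay * teacher_w + (1.0 - decay) * student_w

x = rng.standard_normal((4, 8))      # a toy batch of samples

# Dual-timestep scheduling (values assumed): the student's corruption
# level is much higher than the teacher's, creating the information
# asymmetry described in the article.
t_student, t_teacher = 0.9, 0.3
x_student = add_noise(x, t_student)  # heavily corrupted view
x_teacher = add_noise(x, t_teacher)  # cleaner view of the same data
```

In a real distillation setup the student would then be trained to match the teacher's prediction from its noisier view; the loss and architecture are outside the scope of the excerpt.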

SiliconANGLE - Microsoft open-sources multimodal reasoning model with 15B parameters

Microsoft compared the algorithm to several similarly sized reasoning models using a set of open-source benchmarks. Phi-4-reasoning-vision-15B scored 17% higher than Google LLC’s gemma-3-12b-it on MathVista_Mini, a benchmark that comprises multimodal math questions. The model also achieved higher scores across more than a half-dozen other evaluations.

VentureBeat - Microsoft built Phi-4-reasoning-vision-15B to know when to think — and when thinking is a waste of time

The 15-billion-parameter model, available immediately through Microsoft Foundry, HuggingFace, and GitHub under a permissive license, processes both images and text and can reason through complex math and science problems, interpret charts and documents, navigate graphical user interfaces, and handle everyday visual tasks like captioning photos and reading receipts.

Educating AI Substack - When AI Says “This Quote Is Accurate,” You Shouldn’t Believe It

They do not retrieve text the way databases do. They reconstruct language probabilistically, token by token, based on patterns, likelihoods, and semantic approximation. Even when the original document is fully present in the prompt, the model does not perform exact character-by-character comparison. It generates what seems right. And what seems right is often close enough to feel authoritative without being literally accurate.
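
The contrast is worth making concrete: the deterministic check that an LLM does not perform is a few lines of ordinary code. This is a generic illustration (the function name, the normalization choices, and the sample document are mine, not from the article):

```python
import re

def quote_in_document(quote: str, document: str) -> bool:
    """Exact verification, unlike probabilistic generation: normalize
    whitespace and case, then test for a literal substring match."""
    def norm(s: str) -> str:
        return re.sub(r"\s+", " ", s).strip().lower()
    return norm(quote) in norm(document)

doc = "The model does not perform exact character-by-character comparison."
print(quote_in_document("exact character-by-character comparison", doc))  # True
print(quote_in_document("exact word-by-word comparison", doc))            # False
```

The point of the article stands: when a model says a quote is accurate, it is generating that judgment, not running a check like this one; if accuracy matters, run the check yourself.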

ZDNet - Will AI make cybersecurity obsolete or is Silicon Valley confabulating again?

Wall Street observers think there is a real possibility that AI firms' tools will displace the traditional cybersecurity offerings from companies such as Palo Alto Networks, Zscaler, and Check Point Software. A related field, called observability, is also threatened, including firms such as Dynatrace that sell tools to detect system failures.

AI: Reset to Zero Substack - AI: OpenAI's open source OpenClaw causing AI developer frenzy in China. RTZ #1016

A little over a year ago, China’s open source ‘DeepSeek’ up-ended the US AI developer community with its innovations. Now it seems that the now-US OpenClaw, with its open source AI agents running locally (mainly on Apple computers), is up-ending the AI/tech developer community in China. Exponentially benefiting and expanding the AI ecosystems in both places. Let’s unpack.

Diginomica - "Opportunistic and sloppy" - buyer's regret from CEO Sam Altman as OpenAI's deal with the Department of War comes under heavy fire?

It’s a startling mea culpa from Altman - “Good learning experience for me as we face higher-stakes decisions in the future” - and one triggered presumably by a combination of online opprobrium aimed at the company, an uptick in cancellations of ChatGPT, and the rise of Anthropic’s Claude app to the number one slot on the App Store from being #131 a month earlier.

SiliconANGLE - Anthropic makes switching from competitors easier with new transfer memory tool

As Anthropic PBC’s Claude chatbot moves to the top of the app charts after a spat with the Trump administration, the company has introduced a memory import feature that will allow new customers to import their conversations from rival chatbots and start again with Claude.

VentureBeat - Alibaba's small, open source Qwen3.5-9B beats OpenAI's gpt-oss-120B and can run on standard laptops

To put this into perspective, these models are among the smallest general-purpose models shipped lately by any lab around the world, comparable more to MIT offshoot LiquidAI's LFM2 series, which also runs to several hundred million or a few billion parameters, than to the estimated trillion parameters (model settings) reportedly used for the flagship models from OpenAI, Anthropic, and Google's Gemini series.

Diginomica - Living with the LLMs - how Intuit ignores the 'SaaSpocalypse' in favor of partnering with OpenAI and Anthropic

Given that the Finance role is one of those frequently cited as vulnerable to being automated by AI, Intuit is clearly on the front line of those firms that find themselves caught in the fall-out from the so-called ‘SaaSpocalypse’.

InfoWorld - FinOps for agents: Loop limits, tool-call caps and the new unit economics of agentic SaaS

The first time my team shipped an agent into a real SaaS workflow, the product demo looked perfect. The production bill did not. A small percentage of sessions hit messy edge cases, and our agent responded the way most agents do: it tried harder. It re-planned, re-queried, re-summarized and retried tool calls. Users saw a slightly slower response, and finance saw a step-change in variable spend.
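
The headline's "loop limits and tool-call caps" amount to a small guard object. The sketch below is a generic illustration of that pattern under assumed limits (the class, method names, and default caps are mine, not from the article): the session fails fast with an explicit error instead of silently re-planning and retrying until the bill arrives.

```python
class ToolBudgetExceeded(RuntimeError):
    """Raised when a session exhausts its tool-call or re-plan budget."""

class BudgetedAgent:
    """Hypothetical guard: hard caps on tool calls and re-plan loops,
    so a runaway session surfaces as an error, not as variable spend."""

    def __init__(self, max_tool_calls: int = 20, max_replans: int = 3):
        self.max_tool_calls = max_tool_calls
        self.max_replans = max_replans
        self.tool_calls = 0
        self.replans = 0

    def call_tool(self, tool, *args, **kwargs):
        if self.tool_calls >= self.max_tool_calls:
            raise ToolBudgetExceeded(
                f"tool-call cap {self.max_tool_calls} hit")
        self.tool_calls += 1
        return tool(*args, **kwargs)

    def replan(self):
        if self.replans >= self.max_replans:
            raise ToolBudgetExceeded(f"re-plan cap {self.max_replans} hit")
        self.replans += 1

agent = BudgetedAgent(max_tool_calls=2)
agent.call_tool(str.upper, "ok")   # 1st call succeeds
agent.call_tool(str.upper, "ok")   # 2nd call succeeds
# A third call would raise ToolBudgetExceeded instead of "trying harder".
```

The caps become unit-economics knobs: per-session cost is bounded above by the budget rather than by whatever the messiest edge case happens to cost.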

Computerworld - Trump administration bans Anthropic, seemingly embraces OpenAI

Under the plan, according to Axios, the Defense Department would sever a contract, worth up to $200 million, with Anthropic, and require defense contractors and other vendors to certify they are not using Anthropic’s Claude model in work tied to the Pentagon. The administration is allowing a six-month window to give agencies and contractors time to transition to alternatives.

SiliconANGLE - Satya’s sacrifice: Why agents threaten Office and how Microsoft responds

The real issue is that Nadella’s prediction exposes a vulnerability in Microsoft’s single-user productivity franchise. Specifically, we see agents and a new work surface as the primary interface for knowledge work. Office risks being partially disintermediated – reduced from the place where humans and agents collaborate to a set of file formats that agents can create, edit and orchestrate using open-source engines, while users live inside a new work surface – some container that’s more editable than Claude Cowork, and where Word, Excel and PowerPoint are plug-ins.

Diginomica - A little bit of history repeating? Déjà vu lessons for AI code development

The code generation capabilities are intriguing and often capture a lot of software customer interest. The WOW factor on this gets people excited – really excited – but maybe we should look to the past at previous WOW moments in IT and see whether a more considered, premeditated approach to AI is appropriate.

ZDNet - Is Perplexity's new Computer a safer version of OpenClaw? How it works

On Wednesday, the company introduced Computer, a multiagent orchestration system that harnesses the strongest capabilities from more than a dozen frontier AI models. Currently available only to Perplexity Max users -- and expected to roll out to Enterprise and Pro subscribers in the coming weeks -- "Computer is a general-purpose digital worker," the company wrote in a press release, that "reasons, delegates, searches, builds, remembers, codes, and delivers."

Computerworld - LinkedIn moves to offer skill validations in the AI era

The Verified AI Skills program unveiled in January involves LinkedIn partnering with AI tool providers to automatically validate and display a user’s proficiency directly in their certification section. The initial partners include Lovable, Replit, Relay.app, and Descript, which will track AI proficiency of candidates using their tools to create AI apps.

ZDNet - 90% of sales teams use AI agents - but half of them have the same data problem

AI agent adoption requires better data and fewer tools. Sales teams are unifying data and simplifying tech to improve AI and agent outcomes. Sales pros have data concerns, such as manual errors and duplicate data. Others say bloated tech stacks delay their AI initiatives. 84% of teams without an all-in-one platform plan to consolidate tech.

InfoWorld - Claude Code is blowing me away

But what Claude did was a real eye-opener. He downloaded the service’s command-line interface and used it to do all the work (except logging in—I had to do that). He couldn’t (yet, I suppose) use the website itself. But a CLI? Child’s play.

SiliconANGLE - Red Hat readies its metal-to-agent AI infrastructure stack for hybrid cloud deployments

The company is positioning Red Hat AI Enterprise as a “foundation for AI production” that provides capabilities including AI inference, model tuning, customization, deployment and management tools in a single package. It’s meant to support any kind of AI model in any environment, including the cloud or on-premises.

AI Workplace Wellness Substack - The Quiet Rise of AI Fatigue - by Paul Chaney

One of Khare’s most striking observations is how AI changes the nature of work itself. Instead of building things deeply, many engineers now spend their days reviewing AI outputs: editing, validating, correcting. He likened the experience to judging an endless assembly line of code.

SiliconANGLE - Anthropic slams Chinese AI firms for harvesting data from its Claude chatbot

The process of using data from one AI system to train another is known as “distillation,” and it’s a fairly common technique for developers. But Anthropic’s terms of service prohibit anyone from harvesting Claude’s responses in this way. In addition, they’re meant to prevent its chatbot from being used by anyone in China.

Marcus on AI Substack - Turns out Generative AI was a scam - by Gary Marcus

None of what Ovide had to say about the overestimation of Generative AI should actually come as a surprise. Generative AI has been inherently unreliable from the start; none of the problems that I warned about over the last half decade has been properly solved. Large language models still hallucinate, and they still make boneheaded errors; they still lack a proper concept of reality. They often produce workslop. A recent survey called The Remote Labor Index found that they could only do 2.5% of human tasks, and that is a massive overestimate, since literally everything that requires physical labor was excluded.

SiliconANGLE - Why real-time voice AI is harder than it sounds

But even strong models are only part of the equation. Enterprise voice systems must be deployed like infrastructure, and the needs of business buyers are fundamentally different from those of consumers. “It has to have low latency, it has to have high throughput, it has to be reliable, it has to be debuggable, it has to be adaptable and get better over time,” Stephenson said.

InfoWorld - EFF thinks it’s cracked the AI slop problem

He added, “Enforcement is the hard part. There’s no magic scanner that can reliably detect AI‑generated code and there may never be such a scanner. The only workable model is cultural: require contributors to explain their code, justify their choices, and demonstrate they understand what they’re submitting. You can’t always detect AI, but you can absolutely detect when someone doesn’t know what they shipped.”

SiliconANGLE - The AI trust gap: Developers grapple with issues around security, memory, cost and interoperability

Despite the promise of smaller models, many of the tools on display in the exposition hall at Developer Week were designed to facilitate access to leading AI models such as those from OpenAI Group PBC, Anthropic PBC and China’s DeepSeek. The developer community is looking closely at the merits of both small and large models, a situation that will likely become clearer over the coming year.

ZDNet - Is an AI subscription worth it? How to choose your premium chatbot plan - and what not to do

All have surprisingly capable free plans. ChatGPT, Gemini, and Grok have $8-per-month basic plans that provide a bit more interaction capability but use lesser-capability AI engines. All but Grok offer $20-per-month standard plans. These are the sweet spot for price and performance. Grok's equivalent, called SuperGrok, is $30 per month instead of $20.

Computerworld - Why are AI leaders fleeing?

Normally, when big-name talent leaves Silicon Valley giants, the PR language is vanilla: they’re headed for a “new chapter” or “grateful for the journey” — or maybe there are some vague hints about a stealth startup. In the world of AI, though, recent exits read more like whistleblower warnings.