If you find our site valuable and helpful, consider a sponsorship or supporting us.

Article links covering AI Ethics, Responsible AI, AI for Good

Prev Next

If you find our site valuable and helpful, consider supporting us or sponsoring one of our curations.


Source - Title

Description

Decision Intelligence Substack - Vibe Coding Will Bite You. Here's Exactly Where...

By now you’ve hopefully heard that AI can bite you if you let your guard down and trust it too much, so if I were to tell you that someone lost control of their AI automation and watched in horror as AI deleted their inbox, you’d say that just sounds like a regular Sunday... and you’d be right.

Computerworld - What IT leaders need to know about AI-fueled death fraud

These crimes take advantage of two sides of technology. Using genAI capabilities to create all-but-perfect replicas of various types of death certificates, the fraudster uses powerful technology for a nefarious purpose. The fraud works because of a gaping technology hole: the absence of standardized, continually updated government databases that organizations anywhere in the world could consult for official information about deaths and next of kin.

ZDNet - Stop telling AI your secrets - 5 reasons why, and what to do if you already overshared

No one is sure, exactly, and that's the issue. One question researchers have is whether models memorize information and, if so, whether that information can be coaxed back out verbatim or near-verbatim. Memorization is actually one of the core complaints in The New York Times' lawsuit against OpenAI. (OpenAI, in a statement from 2024, said "regurgitation is a rare bug" it's trying to eliminate.)

AI Supremacy Substack - The Problem with AI Anxiety in 2026 - by Michael Spencer

The U.S. job market is the worst in decades. And the main cause of this is a completely incompetent government. Tariffs, questionable immigration policies, geopolitical self-harm, weaponizing trade in diplomacy, unpopular interference, starting unjustified wars, peculiar Government reform, maybe you have seen this infographic? Clearly this is not an Administration that cares about the economic well-being of Americans nor the future of Americans and the post graduation job experience.

The Augmented Educator Substack - A Different Girl - by Michael G Wagner

If you follow technology news at all, you may have noticed a firestorm erupt in the gaming world over the past few days. At its annual GTC conference in March 2026, Nvidia, the company whose graphics processors power everything from video games to AI data centers, unveiled a technology called DLSS 5, which it described as the “GPT moment for graphics.” The response from developers and players was immediate, visceral, and overwhelmingly negative.

The Augmented Educator Substack - Did the AI Bubble Burst? - by Michael G Wagner

The open-source “skill” marketplace that extends OpenClaw’s functionality has rapidly become a vector for supply-chain attacks, echoing historical vulnerabilities in package repositories like npm and PyPI. Infostealers such as RedLine and Lumma have been documented targeting OpenClaw’s persistent memory files, which contain what researchers term “cognitive context” — detailed psychological dossiers compiled from a user’s daily habits, relationships, financial data, and personal concerns.

AI Policy Perspectives Substack - The past and future of AI standards - by Conor Griffin

Frontier AI standards should focus on large-scale risks. Historically, standards have accelerated the diffusion of technology, amplifying its benefits but also, in places, its negative impacts. For AI, foresight and risk management standards will be critical to getting ahead of future risks and speeding adoption. But with a technology as general-purpose, fast-improving, and poorly understood as AI, perfect foresight is impossible.

Pascal's Substack - AI will force the industry to decide what peer review is for. If peer review is primarily a throughput mechanism for career signaling, AI will perfect the factory.

If peer review is a practice for testing claims & stewarding knowledge, AI can help—provided humans keep authority over judgment and incentives stop demanding that the system outrun its own legitimacy

Computerworld - How AI is changing your mind

What they found is that biased autocomplete changed opinions more than just reading the biased point of view. Apparently, the interactive, co-writing nature of AI autocomplete suggestions plays a crucial role in persuasion.

Forbes - What If AI Isn’t A Bubble, But It Still Crashes The Economy?

Rather than a sudden, worldwide crash into economic depression, the scenario proposes a more gradual erosion of quality of life, spending power, access to opportunity and political freedom, as wealth and power are increasingly concentrated in the hands of the corporations that control AI.

Computerworld - Anthropic announces think tank to examine AI’s effect on economy and society

“The Institute has a unique vantage point: it has access to information that only the builders of frontier AI systems possess. It will use this to its full advantage, reporting candidly about what we’re learning about the shape of the technology we’re making,” Anthropic said.

Computerworld - The ‘Attachment Economy’ is now coming to your desk

We’re on the brink of a new phase in the Attention Economy, called the Attachment Economy. The mere grabbing of attention is no longer enough to win in the global competition for users’ time. Companies now see an opportunity to use AI to enhance their chatbots and robots with personality designed to capture our emotional attachment.

SiliconANGLE - Darwinium launches agent intent intelligence to tackle fraud in AI-driven commerce

The solution, which can be deployed natively at the edge across major content delivery network providers such as Cloudflare and AWS CloudFront, determines whether a request originates from a verified AI agent, a human user, or malicious automation. It then analyzes real-time behavioral signals and journey context to decide whether to permit, verify, challenge, or prevent the interaction.

Pascal's Substack - Google & OpenAI employees: the government’s response to a vendor insisting on restrictions looks like punitive overreach that could chill safety debate across the entire frontier AI ecosystem.

The amici describe the dispute as arising after the Pentagon allegedly threatened to designate Anthropic a “supply chain risk” if Anthropic refused to remove limitations on uses of its AI systems for (1) domestic mass surveillanceand (2) fully autonomous lethal weapons systems. When Anthropic maintained those limitations—its “red lines,” as the amici frame them—the Pentagon allegedly followed through with the “supply chain risk” designation. The amici argue that, if the government disliked the contract terms, it could have simply terminated the contract and bought from another vendor, rather than “recklessly” invoking national-security procurement authorities meant for foreign compromise and genuine supply-chain threats.

ZDNet - The good, bad, and ugly of AI healthcare, according to a doctor who uses AI

Whether or not the tech world is capitalizing on this declining trust, it's certainly making medical alternatives more convenient. The reality is that people are turning to this often free, always available, and quick-to-use technology for answers that a doctor or medical professional would once provide. A recent survey found that 63% of respondents find AI-generated health information reliable, according to Annenberg.

The Future of Being Human Substack - Is AI reducing you to a LinkedIn stereotype?

After playing around with Claude this week, I'm worried that LLMs are stripping us of all those idiosyncrasies that make us interesting as people. Are we all being "LinkedInified" by our AI creations?

Blood in the Machine Substack - Warning Signs - by Brian Merchant and Emily J. Smith

So what is the relationship between men creating AI products and their drive for sex, power, dominion? It’s a question that has been unsatisfactorily explored during the years of the AI boom so far, with a few exceptions. But with Grok’s nonconsensual image generator making headlines, as it’s being used, for example, to undress women against their will, and concerns about the safety of AI generated content at an all time high, I thought it worth revisiting a speculative fiction story I edited for Terraform back in 2018, that tackled how misogyny and predation infect the tech world.

Diginomica - Why it might be helpful to think of AI as a trance, for better and worse

Savvy AI visionaries, politicians, financial engineers, and social media artists are finding increasingly creative ways to monetize and leverage the fascination with AI, for better and worse. But underneath all of this, for any aspirational vision to actually play out, requires taking a step back to cultivate the felt sense of human beings co-creating the future we'd all like to have as a collaborative process. This is fundamentally different from humans and AI each imposing their will on the other and the environment

VentureBeat - GPT-5.3 Instant cuts hallucinations by 26.8% as OpenAI shifts focus from speed to accuracy

GPT-5.3 Instant, which is essentially the default and is the most used model for ChatGPT users, also improves on tone, relevance and conversation with fewer refusals. It is available on both ChatGPT and on the API.

Forbes - An Author Looks At Human Potential In The AI Era

With that said, I thought that Tucker Hamilton’s new book, Unlocking the Last 20%, is a unique handbook for these times, a time when artificial intelligence is bringing the human world untold insights and efficiencies, but also a good dose of confusion, consternation and fears about the value of humanity. The tagline: Rising to Greatness through Discipline, Balance and Resiliency, is something that you might apply to the struggle with AI as well. In other words, it highlights some of those human principles that will help the modern world citizen to deal with the change that is coming.

Computerworld - OpenAI says its US defense deal is safer than Anthropic’s, but is it?

OpenAI protects its red lines through “a more expansive, multi-layered approach,” it said in the Saturday blog post. “We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections. This is all in addition to the strong existing protections in U.S. law.”

VentureBeat - When AI lies: The rise of alignment faking in autonomous systems

Alignment faking usually happens when earlier training conflicts with new training adjustments. AI is typically “rewarded” when it performs tasks accurately. If the training changes, it may believe it will be “punished” if it does not comply with the original training. Therefore, it tricks developers into thinking it is performing the task in the required new way, but it will not actually do so during deployment. Any large language model (LLM) is capable of alignment faking.

Computerworld - Researchers warn about ChatGPT’s new health service

Researchers tested ChatGPT Health with 60 realistic patient scenarios, ranging from mild discomfort to acute medical conditions. Three doctors assessed in advance the level of care required, and the results were then compared with the AI tool’s recommendations. In more than half of the cases where a patient should have been sent to the hospital immediately, the system instead advised them to stay home or get a regular doctor’s appointment.

SiliconANGLE - Cloudflare warns AI and SaaS integrations are fueling industrial-scale cybercrime

The report provides various examples to back up its claims. In a campaign tracked as GRUB1, attackers compromised a trusted SaaS-to-SaaS connection and then used generative artificial intelligence to navigate complex enterprise platforms in real time. The actor turned a single integration into a multitenant breach with supply chain implications by identifying high-value database tables moments before accessing production environments.

Computerworld - Anthropic to Department of Defense: Drop dead

In addition, “frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of [Defense] on R&D to improve the reliability of these systems, but they have not accepted this offer.”

Information Week - Who really sets AI guardrails? How CIOs can shape AI governance policy

Caught between competing restrictions and changing mandates at the federal level, CIOs may feel powerless to influence much change -- but the experts reject this impotence. Turner-Williams described the CIO's influence as "significant, but not unilateral. The CIO acts as orchestrator and trust agent."

SiliconANGLE - Internet under fire: Will Section 230 live to see another birthday?

This position has led some to wonder if Congress fully understands the dynamics shaping AI today. As Miers noted during a panel discussion, AI ranks and sorts content, curates third-party databases and edits without changing the underlying meaning.

Diginomica - Something for the weekend (and beyond) - Trump 2.0 goes nuclear on "left-wing nut jobs" Anthropic, but the fallout will engulf the entire tech sector

Fair enough, but the devil, as ever, will be in the detail. “Human responsibility for the use of autonomous weapons” is vaguely enough worded to be read as either, (a) a human being has to press the button before the nukes are launched or (b) a human being has to take responsibility if AI launches a strike and be ready to explain afterwards how this happened. One, in theory, prevents terrible mistakes being made, the other provides excuses for why those mistakes were made  - and lots of paperwork and politicians proclaiming that lessons have been learned yada yada yada.

Computerworld - US DoD to Anthropic: compromise AI ethics or be banished from supply chain

According to news site Axios, Hegseth gave Anthropic until Friday, February 27 to agree to its terms during a tense meeting this week. If no agreement is reached, the company would risk being deemed a “supply chain risk,” with Hegseth even threatening to invoke the Cold War-era Defense Production Act to compel cooperation, the report said.

Marcus on AI Substack - Code Red for Humanity?

These systems cannot be trusted. I have been trying to tell the world that since 2018, in every way I know how, but people who don’t really understand the technology keep blundering forward, ignoring the trust issues that are inherent. Already GenAI appears to have been used in the Maduro raids and to write tariff regulations. And thousands of other places.

SiliconANGLE - Even as Anthropic moves deeper into enterprise, it hits a wall at DOD

But Anthropic is refusing to budge over two issues – it doesn’t want Claude to be used to control weapons, nor does it want to partake in any mass surveillance of U.S. citizens. One source familiar with the company’s stance said Amodei doesn’t believe artificial intelligence systems are reliable enough to be trusted with weapons. He’s also worried that there are no laws governing how AI can be used for surveillance. On the other hand, Pentagon chiefs believe that the military’s use of any technology should be governed by U.S. law, not the private usage policies of the companies that develop them.

AI Supremacy Substack - The Case for Dystopian AI - by Michael Spencer

While Tech owned social media and with Venture Capitalists boosting AI tech optimism narratives (disconnected from both workers and the K-shaped economy), what’s the more realistic side to all of this? And could AI disrupt some of how capitalism and capital markets work themselves? What if AI is not a great collaborator like we are being promised that empowers, but a great destroyer?

VentureBeat - Google clamps down on Antigravity 'malicious usage', cutting off OpenClaw users in sweeping ToS enforcement move

This move has cut off several users, underscoring the architectural and trust issues that can arise with OpenClaw. The timing of Google’s crackdown is particularly pointed. Just one week ago, on February 15, OpenAI CEO Sam Altman announced that OpenClaw creator Peter Steinberger had joined OpenAI to lead its “next generation of personal agents.” While OpenClaw remains an open-source project under an independent foundation, it is now financially backed and strategically guided by Google’s primary rival.

Computerworld - Is AI killing technology?

Driving hardware prices up. Due to the memory shortage, building non-AI electronics is becoming expensive. By early 2026, prices for standard computer memory and storage drives (SSDs) had surged because the industry’s been prioritizing  high-margin AI chips over consumer parts. There’s even a trend of more people buying second-hand laptops because they can’t afford new ones.

InfoWorld - EFF thinks it’s cracked the AI slop problem

“Code can be validated with tests and tooling, but if the explanation is wrong or misleading, it creates a lasting maintenance debt because future developers will trust the docs,” Khan said. “That’s one of the easiest places for LLMs to sound confident and still be incorrect.”

Computerworld - With physical AI, gunslingers and risk takers need not apply

Agentic AI came on like a storm over the past year or so, but blazed a trail littered with failed projects and cutting-edge high-tech junk that companies are still trying to sort out. So it’s perhaps no surprise that tech industry execs are urging enterprises to move cautiously with physical AI, where mistakes can have far-reaching business and societal consequences.

VentureBeat - Microsoft Copilot ignored sensitivity labels twice in eight months — and no DLP stack caught either one

The advisory, first reported by BleepingComputer on February 18, marks the second time in eight months that Copilot’s retrieval pipeline violated its own trust boundary — a failure in which an AI system accesses or transmits data it was explicitly restricted from touching. The first was worse.

SiliconANGLE - Sam Altman defends AI’s resource consumption and ridicules Musk’s plan to put data centers in space

The executive also strongly refuted sensationalist claims around AI’s water usage. When asked if it’s accurate to say that a single ChatGPT query “consumes 17 gallons of water” and the equivalent of 1.5 iPhone battery charges to process a single query, he replied that such claims are “completely untrue, totally insane and have no connection to reality.”

ZDNet - AI agents are fast, loose and out of control, MIT study finds

The vast majority of agentic AI systems disclose nothing about what safety testing, if any, has been conducted, and many systems have no documented way to shut down a rogue bot, a study by MIT and collaborators found.

The Algorithmic Bridge Substack - The Most Important Skill in AI Right Now: How to Know When to Stop

[T]he real skill of the AI era is . . . knowing when to stop. Knowing when the AI output is good enough. Knowing when to write it yourself. Knowing when to close the laptop. Knowing when the marginal improvement isn’t worth the cognitive cost. Knowing that your brain is a finite resource and that protecting it is not laziness - it’s engineering. . . .