Introducing the Higgins-Berger Scale of AI Ethics

A practical framework for creative agencies using generative AI

Creative agencies have always been early adopters. Digital production, social platforms, automation — new tools show up, get absorbed, and eventually become invisible.

Generative AI is different. Not because it’s magic, but because it blurs lines that used to matter. Authorship. Responsibility. Trust. The question isn’t just what can be made anymore. It’s who’s accountable for it once humans and machines are working together.

The Higgins-Berger Scale exists to deal with that.

It’s not a moral verdict on AI, and it’s not a list of rules meant to slow anyone down. It’s a practical framework for evaluating how generative AI is actually being used in creative, informational, and commercial work — and for making those choices visible, defensible, and intentional.

Ethics here isn’t philosophy. It’s a design constraint.

Practice, not theory

Most AI ethics conversations live at the extremes. Either they’re abstract principles that collapse under real deadlines, or they’re rigid rules that ignore how creative work actually happens.

This scale is meant to be used inside real workflows.

It looks at outcomes and processes, not press releases or stated intentions. The question it asks is simple:

Given how AI is being used in this specific project, what ethical risks are being introduced — and how are they being handled?

To answer that, the scale focuses on five areas where generative AI consistently changes the ethical landscape:

  • Transparency
  • Potential for harm
  • Data usage and privacy
  • Displacement impact
  • Intent

Each category is scored based on observable behavior. Lower scores reflect stronger alignment. Higher scores signal the need for mitigation, redesign, or restraint.

Perfection isn’t required. Judgment is.
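
To show how such a review might be recorded in practice, here is a minimal Python sketch. The five category names come straight from the list above; the 1-to-5 range, the zone thresholds, and the intermediate zone labels are illustrative assumptions, not the published rubric.

    # Hypothetical sketch of an HBS-style project review record.
    # The five categories come from the scale itself; the 1-5 range,
    # the thresholds, and the middle zone labels are illustrative
    # assumptions, not the published rubric.
    from dataclasses import dataclass, asdict

    @dataclass
    class HBSReview:
        transparency: int = 1         # 1 = strong alignment, 5 = serious concern
        potential_for_harm: int = 1
        data_usage_privacy: int = 1
        displacement_impact: int = 1
        intent: int = 1

        def total(self) -> int:
            return sum(asdict(self).values())

        def zone(self) -> str:
            score = self.total()
            if score <= 8:
                return "exemplary"
            if score <= 14:
                return "acceptable with mitigation"
            if score <= 20:
                return "needs redesign"
            return "unacceptable"

    review = HBSReview(transparency=2, potential_for_harm=3, intent=2)
    print(review.total(), review.zone())   # 9 acceptable with mitigation

The point of writing it down this way is not the number; it is that each category has to be scored on observable behavior, which forces the earlier conversation the scale is designed to provoke.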

Transparency means accuracy, not disclosure theater

Transparency doesn’t mean listing every tool in the stack. It means not misleading people.

Claiming purely human authorship when AI played a meaningful role undermines trust — especially in contexts where audiences expect craftsmanship, originality, or accountability. Journalism. Education. Political messaging. Explicitly handcrafted work.

As AI becomes standard, many audiences already assume some machine assistance. Transparency matters most when omission would mislead. In those cases, clarity isn’t performative. It’s corrective.

Harm lives in context

AI doesn’t create harm in a vacuum. Harm comes from context, distribution, and interpretation.

The scale looks at whether AI output could reasonably mislead, reinforce bias, damage reputations, or create foreseeable downstream consequences once it’s released.

The goal isn’t zero risk. It’s examined risk. Lower scores reflect work where safeguards exist and human review actually matters. Higher scores reflect unexamined assumptions or indifference to how the work might land.

The presence of a machine doesn’t change the responsibility.

Data responsibility doesn’t disappear

Agencies may not control how large models are trained, but they’re still responsible for what they feed into them and how outputs are used.

Sensitive inputs. Questionable datasets. Ignored licensing. None of that becomes acceptable because it’s automated or convenient.

Unclear data provenance isn’t a loophole. It’s a warning sign.

Augmentation beats erasure

Displacement alone isn’t unethical. Creative work has always changed with new tools.

Risk increases when automation quietly replaces human judgment while preserving the appearance of human authorship.

The scale distinguishes between AI used to augment creative work and AI used to substitute for it. Projects that treat AI as a collaborator score very differently from those that remove people from the process without saying so.

Trust is built not just on what gets delivered, but on who is still responsible for it.

Intent ties it together

Across all five categories, intent is the connective tissue.

Commercial goals aren’t the problem. Risk escalates when speed, novelty, or engagement are prioritized over transparency, consent, or harm mitigation.

Most ethical failures don’t come from malice. They come from disengagement — from quietly removing human responsibility because the system makes it easy.

The point isn’t the score

Projects land in ethical zones ranging from exemplary to unacceptable. These aren’t judgments of creativity or innovation. They’re signals of risk and oversight.

A low score isn’t moral permission. A high score isn’t an accusation. The value of the scale is that it forces earlier conversations — before shortcuts become habits and habits become liabilities.

Ethical use of generative AI doesn’t require abstinence. It requires intention, awareness, and accountability.

The Higgins-Berger Scale isn’t meant to be static. It’s meant to evolve. Its purpose isn’t to produce a number — it’s to keep human responsibility visible wherever machines are invited into the creative process.

Review the latest version of the Higgins-Berger Scale (Version 2.5)

Or test your process using the HBS Interactive Utility

How 2025 Killed the AI Hype — and Why 2026 Will Liquidate the Middlemen

The 2025 Ethical Graveyard and the 2026 Agentic Squeeze


Silicon Valley has always treated ethics as a trailing indicator — a cleanup crew for the mess left behind by “disruption.” “Move fast and break things” worked when the things being broken were curated playlists, cluttered inboxes, or taxi medallions. But in 2025, the failures were fundamentally different. We didn’t just break apps; we broke trust contracts.

The collapse we witnessed over the last twelve months wasn’t a standard tech-cycle crash. It was a pruning of the vine. A set of foundational assumptions that fueled the 2023–2024 boom simply dissolved:

  • The belief that opacity could scale indefinitely.
  • The hope that reliability was an optional “version 2.0” feature.
  • The delusion that users would tolerate permanent dependency on black-box systems.

The companies that didn’t make it to 2026 didn’t just run out of runway; they ran out of legitimacy. In the new AI economy, once legitimacy evaporates, no amount of compute or branding can bring it back.

The Ethical Graveyard (2025)

Transparency: When “Magic” Was Just Deception (Builder.ai)

The most clinical implosion of the year belonged to Builder.ai. On paper, it was the dream of the no-code era: automated software construction powered by an omniscient AI. In practice, audits and investigations revealed a routing layer that funneled tasks to a concealed offshore human workforce.

This crossed the line from prototyping into product misrepresentation. Whatever the original intent, what customers purchased as automation resolved into labor — priced, marketed, and valued as software. The market reaction wasn’t driven by moral outrage, but by brutal math. Enterprises realized they weren’t buying scalable automation; they were buying labor arbitrage disguised by a high-margin software multiple.

The Lesson: In AI, transparency isn’t a virtue; it’s a technical specification. If you sell automation and deliver humans-in-a-trench-coat, you aren’t a platform — you’re an accounting liability waiting to happen.

Reliability: The Danger of Beta-Testing Humanity (Humane AI Pin)

If 2024 was the year of the AI wearable, 2025 was the year reality intervened. The high-profile failure of the Humane AI Pin (and its contemporaries) wasn’t due to bad industrial design. It failed because it misunderstood the ethical load of the interface it sought to replace.

A smartphone is a tool you can put in your pocket. A wearable agent mediating your navigation, communication, and social context is life-adjacent infrastructure. Humane didn’t fail because it shipped early — it failed because it treated probabilistic output as deterministic authority. Thermal shutdowns during critical moments, hallucinated directions in unfamiliar cities, and inconsistent voice triggers weren’t just bugs — they were violations of an unwritten rule: do not increase the user’s cognitive risk.

The Lesson: You cannot outsource cognitive load to an unreliable narrator. Ethics enters the chat not as philosophy, but as uptime. If the system isn’t 99.99% reliable, “hands-free convenience” becomes anxiety, not liberation.
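
For a sense of what that reliability bar actually means, a quick back-of-the-envelope calculation helps (ordinary uptime arithmetic, not a figure from Humane): 99.99% availability leaves roughly 53 minutes of downtime per year, or about 8.6 seconds per day.

    # Downtime budget implied by an availability target (simple arithmetic).
    MINUTES_PER_YEAR = 365 * 24 * 60    # 525,600
    SECONDS_PER_DAY = 24 * 60 * 60      # 86,400

    for target in (0.99, 0.999, 0.9999):
        per_year_min = (1 - target) * MINUTES_PER_YEAR
        per_day_sec = (1 - target) * SECONDS_PER_DAY
        print(f"{target:.2%} uptime -> {per_year_min:,.0f} min/year, {per_day_sec:.1f} s/day")

    # 99.00% uptime -> 5,256 min/year, 864.0 s/day
    # 99.90% uptime -> 526 min/year, 86.4 s/day
    # 99.99% uptime -> 53 min/year, 8.6 s/day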

Data Sovereignty: The Un-Smartening of the Home (iRobot)

The quietest failure of 2025 wasn’t a bankruptcy — it was a trust withdrawal. iRobot, once the gold standard of the smart home, attempted to offset hardware margin pressure by monetizing spatial data.

Consumers didn’t stage a protest; they simply looked for the exit. Privacy stopped being an abstract concern for digital-rights activists and became a functional product requirement. Local-first stopped being a niche hobbyist term and became a mark of durability. Devices that required constant cloud mediation for basic operation began to feel fragile, risky, and — eventually — obsolete.

The Lesson: Privacy is no longer a policy layer; it’s a feature. When a device must export the geometry of your living room to function, users no longer see intelligence. They see exposure.

The Infrastructure Wake-Up Call

What made these failures stick was the backdrop of a shaking foundation. In 2025, we saw the infrastructure blink.

Outages across Google Ads and Cloudflare didn’t take the internet down, which is exactly why they were so unsettling. When Google Ads paused, thousands of businesses discovered they didn’t have a marketing strategy — they had a revenue pipe they didn’t own. When Cloudflare flickered, it revealed that the “distributed” internet is logically centralized around a few high-leverage choke points, where a single API hiccup can zero out a day’s revenue.

The Lesson: Resilience isn’t about uptime percentages; it’s about blast radius. This is where ethics and infrastructure converge. Hidden dependencies are trust failures waiting to happen.

The Agentic Squeeze (2026)

If 2025 cleared the deadwood, 2026 will liquidate the middlemen. We are entering the era of the Agentic Squeeze, where the distance between intent and execution collapses toward zero.

The Legal Squeeze: Perplexity and Attribution Debt

Perplexity AI sits on a growing pile of attribution debt. Its value proposition — collapsing diverse sources into a single, polished answer — conflicts directly with the economic reality of the publishers it depends on.

In 2026, we won’t see Perplexity disappear. We’ll see compression. Licensing requirements and legal guardrails will push margins toward utility pricing. The product survives, but the venture-scale upside evaporates. It becomes infrastructure — valuable, necessary, and boring.

The Feature Squeeze: Character.AI and the Generalists

Character.AI faces a different pressure. Its primary competitor isn’t another startup — it’s the evolution of general-purpose models. Once GPT-5 and Gemini deliver native long-term memory, persona persistence, and emotional tone control, companionship ceases to be a category. It becomes a setting.

This is the category-to-setting collapse: when a standalone product degrades into a checkbox inside a general system. Expect 2026 to be the year persona apps are quietly absorbed by the giants.

The Wrapper Purge

The most ruthless phase of 2026 will be the Wrapper Purge. AI abstraction is moving directly into the operating system — macOS, iOS, Windows, Android.

Any product that exists solely to do one thing an agent can invoke natively is in terminal danger:

  • Standalone PDF assistants: now a native right-click.
  • Basic AI copywriting tools: embedded in every text field.
  • Single-workflow summarizers: handled by the notification layer.

Unless a company owns proprietary data or deep, specialized workflow context, it will be replaced by the “Insert AI” button.

The “Insert AI” button doesn’t compete.
It eliminates.

The Sovereignty Pivot

The winners of 2026 won’t be anti-cloud, but they will be anti-opacity. The market is shifting toward Hybrid Sovereignty:

  • Identity and data stay local; compute scales to the cloud only when necessary.
  • Verifiable agents with inspectable reasoning — no more “trust me, I’m an AI.”
  • Graceful degradation: systems that still work locally when the servers go dark.
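
To make that last point concrete, here is a minimal Python sketch of the pattern: try the cloud first, then fall back to local behavior when it is unreachable. The function names and the crude local fallback are placeholders; a real system would swap in its own cloud client and on-device model.

    # Minimal sketch of graceful degradation: prefer cloud compute,
    # fall back to a local model when the servers go dark.
    # summarize_remote and summarize_local are hypothetical stand-ins
    # for whatever cloud API and on-device model a product actually uses.

    def summarize_remote(text: str) -> str:
        raise ConnectionError("cloud endpoint unreachable")   # simulate an outage

    def summarize_local(text: str) -> str:
        # Crude on-device fallback: keep the first two sentences.
        sentences = [s.strip() for s in text.split(".") if s.strip()]
        return ". ".join(sentences[:2]) + "."

    def summarize(text: str) -> str:
        try:
            return summarize_remote(text)
        except (ConnectionError, TimeoutError):
            # Degrade locally instead of failing outright.
            return summarize_local(text)

    print(summarize("The outage lasted an hour. Orders kept shipping. Nobody noticed."))
    # -> The outage lasted an hour. Orders kept shipping.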

Final Take

2025 didn’t kill AI hype; it killed the illusion that abstraction equals safety. Ethical shortcuts surfaced as operational failures. Infrastructure hiccups surfaced as trust failures. Trust failures surfaced as valuation collapses.

As we move into 2026, the rule of the road is simple:

If your product exists only to stand between the user and execution, you are already obsolete.

Nothing exploded.
Everything simply re-priced.

The 2026 Outlook

  • Reliability is the new ethics.
  • Privacy is the new utility.