A practical framework for creative agencies using generative AI

Creative agencies have always been early adopters. Digital production, social platforms, automation — new tools show up, get absorbed, and eventually become invisible.
Generative AI is different. Not because it’s magic, but because it blurs lines that used to matter. Authorship. Responsibility. Trust. The question isn’t just what can be made anymore. It’s who’s accountable for it once humans and machines are working together.
The Higgins-Berger Scale exists to deal with that.
It’s not a moral verdict on AI, and it’s not a list of rules meant to slow anyone down. It’s a practical framework for evaluating how generative AI is actually being used in creative, informational, and commercial work — and for making those choices visible, defensible, and intentional.
Ethics here isn’t philosophy. It’s a design constraint.
Practice, not theory
Most AI ethics conversations live at the extremes. Either they’re abstract principles that collapse under real deadlines, or rigid rules that ignore how creative work actually happens.
This scale is meant to be used inside real workflows.
It looks at outcomes and processes, not press releases or stated intentions. The question it asks is simple:
Given how AI is being used in this specific project, what ethical risks are being introduced — and how are they being handled?
To answer that, the scale focuses on five areas where generative AI consistently changes the ethical landscape:
- Transparency
- Potential for harm
- Data usage and privacy
- Displacement impact
- Intent
Each category is scored based on observable behavior. Lower scores reflect stronger alignment with responsible practice. Higher scores signal the need for mitigation, redesign, or restraint.
Perfection isn’t required. Judgment is.
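For teams that want to make these reviews part of the brief rather than an afterthought, the five categories translate naturally into a simple project record. The sketch below is illustrative only: the scale defines the categories, while the 1-to-5 range, the attention threshold, and the field names are assumptions made for this example, not part of the framework itself.

```python
from dataclasses import dataclass, fields

# Illustrative sketch only. The Higgins-Berger Scale defines five categories;
# the 1-5 scoring range and the attention threshold below are assumptions.

@dataclass
class HigginsBergerReview:
    """Per-project review: lower scores reflect stronger alignment."""
    transparency: int        # Is anyone being misled about AI's role?
    potential_for_harm: int  # Could the output mislead, bias, or damage once released?
    data_usage: int          # Were inputs, datasets, and licensing handled responsibly?
    displacement: int        # Augmentation, or quiet substitution of human judgment?
    intent: int              # Were speed or novelty prioritized over consent and mitigation?

    def flagged(self, threshold: int = 4) -> list[str]:
        """Return categories whose scores signal a need for mitigation,
        redesign, or restraint (the cut-off is an assumed value)."""
        return [f.name for f in fields(self) if getattr(self, f.name) >= threshold]


# Example: a campaign where data provenance was never examined.
review = HigginsBergerReview(
    transparency=2, potential_for_harm=2, data_usage=5, displacement=3, intent=2
)
print(review.flagged())  # ['data_usage']
```

Keeping the review as a plain record means it can sit next to the creative brief and be revisited as the work changes, rather than living in a separate compliance document.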
Transparency means accuracy, not disclosure theater
Transparency doesn’t mean listing every tool in the stack. It means not misleading people.
Claiming purely human authorship when AI played a meaningful role undermines trust — especially in contexts where audiences expect craftsmanship, originality, or accountability. Journalism. Education. Political messaging. Explicitly handcrafted work.
As AI becomes standard, many audiences already assume some machine assistance. Transparency matters most when omission would mislead. In those cases, clarity isn’t performative. It’s corrective.
Harm lives in context
AI doesn’t create harm in a vacuum. Harm comes from context, distribution, and interpretation.
The scale looks at whether AI output could reasonably mislead, reinforce bias, damage reputations, or create foreseeable downstream consequences once it’s released.
The goal isn’t zero risk. It’s examined risk. Lower scores reflect work where safeguards exist and human review actually matters. Higher scores reflect unexamined assumptions or indifference to how the work might land.
The presence of a machine doesn’t change the responsibility.
Data responsibility doesn’t disappear
Agencies may not control how large models are trained, but they’re still responsible for what they feed into them and how outputs are used.
Sensitive inputs. Questionable datasets. Ignored licensing. None of that becomes acceptable because it’s automated or convenient.
Unclear data provenance isn’t a loophole. It’s a warning sign.
Augmentation beats erasure
Displacement alone isn’t unethical. Creative work has always changed with new tools.
Risk increases when automation quietly replaces human judgment while preserving the appearance of human authorship.
The scale distinguishes between AI used to augment creative work and AI used to substitute for it. Projects that treat AI as a collaborator score very differently from those that remove people from the process without saying so.
Trust is built not just on what gets delivered, but on who is still responsible for it.
Intent ties it together
Across all five categories, intent is the connective tissue.
Commercial goals aren’t the problem. Risk escalates when speed, novelty, or engagement takes priority over transparency, consent, or harm mitigation.
Most ethical failures don’t come from malice. They come from disengagement — from quietly removing human responsibility because the system makes it easy.
The point isn’t the score
Projects land in ethical zones ranging from exemplary to unacceptable. These aren’t judgments of creativity or innovation. They’re signals of risk and oversight.
A low score isn’t moral permission. A high score isn’t an accusation. The value of the scale is that it forces earlier conversations — before shortcuts become habits and habits become liabilities.
Ethical use of generative AI doesn’t require abstinence. It requires intention, awareness, and accountability.
The Higgins-Berger Scale isn’t meant to be static. It’s meant to evolve. Its purpose isn’t to produce a number — it’s to keep human responsibility visible wherever machines are invited into the creative process.