A landmark UK judgment in Getty Images v Stability AI has reduced uncertainty for organisations that build and use AI. The court rejected the idea that a diffusion model is itself a copy of its training images.
For most organisations operating in the UK, especially those using hosted AI services and modern model versions, the legal risk has become clearer – provided sensible guardrails are in place.
So, what are the key headlines?
- A diffusion model (like Stable Diffusion) is not a copy of the images used to train it. It learns patterns (eg, shapes, textures and styles) by practising how to remove ‘noise’ during training; it does not store the training images and it does not access them when generating new images.
- For organisations operating in the UK, the judgment narrows the issues in play for using image-generation models and providing access via hosted services. If downloadable model files are shared, this raises a distinct set of governance considerations (eg, access controls and usage terms).
- Location and delivery matter: the case proceeded on the basis that training occurred outside the UK, and using hosted access does not import a copy of the model into the UK.
- The judgment did not decide whether training on copyrighted material amounts to copyright infringement: the Training and Development claim was abandoned, and the court proceeded on the basis that training occurred outside the UK, so that question remains open.
The bottom line? With the right controls, many UK enterprises may feel more able to adopt and scale AI with greater confidence – particularly via hosted access – while recognising some questions remain open.
What the court decided in plain terms
1) Models aren’t ‘copies’ of their training data
The judge accepted technical evidence that diffusion models learn statistical patterns and don’t retain original training images. This means the model weights themselves aren’t infringing copies. This is a crucial building block for lawful development and enterprise use.
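For readers who want to see what ‘learning patterns rather than storing images’ means in practice, here is a minimal, illustrative sketch of a denoising training step of the kind the court heard evidence about. It is not Stability AI's code: the toy network, tensor shapes and file name are placeholders we have chosen for illustration.

```python
# Illustrative sketch only: a toy denoising training step. The network below
# is a stand-in for a real diffusion U-Net, not Stability AI's code.
import torch
import torch.nn as nn

# Toy "denoiser" - a placeholder for the real architecture.
denoiser = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 3, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

def training_step(image_batch: torch.Tensor) -> float:
    """One step: the model learns to predict the added noise; only its
    parameters change, and the training images are not retained."""
    noise = torch.randn_like(image_batch)
    noisy = image_batch + noise                # corrupt the images with noise
    predicted_noise = denoiser(noisy)          # the model's guess at that noise
    loss = nn.functional.mse_loss(predicted_noise, noise)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                           # updates the weights only
    return loss.item()

loss = training_step(torch.rand(4, 3, 64, 64))  # a dummy batch of images

# What gets shipped afterwards is a parameter file - weights, not images:
# torch.save(denoiser.state_dict(), "model_weights.pt")
```

The point of the sketch is that the artefact produced by training is a set of numerical parameters, which is why the court could treat the model weights as something other than a copy of the dataset.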
2) Where and how you access a model matters
Using AI through a hosted platform (for example, an API or web service) doesn’t mean an infringing copy is being imported or possessed in the UK. Distributing downloadable model files is a different activity and should be governed carefully – although the import and possession questions only arise if the distributed file (ie, the model weights/checkpoint) is a ‘copy’, which the court held a modern diffusion model is not.
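To make the architectural distinction concrete, here is a hedged sketch in Python. The endpoint URL, API key and file names are hypothetical, not any vendor's real interface; the point is simply that hosted access keeps the model files off your infrastructure, while a downloaded checkpoint sits within it and needs its own controls.

```python
# Illustrative sketch only: the endpoint, key and file names are hypothetical.
# Hosted access sends a prompt over the network and receives an output;
# downloading model files brings the weights onto your own machines.
import requests

def generate_hosted(prompt: str) -> bytes:
    """Hosted route: no model files enter your estate, only generated outputs."""
    response = requests.post(
        "https://api.example-ai-provider.com/v1/images",   # hypothetical URL
        headers={"Authorization": "Bearer YOUR_API_KEY"},  # hypothetical key
        json={"prompt": prompt},
        timeout=60,
    )
    response.raise_for_status()
    return response.content  # generated image bytes

def load_downloaded_model(weights_path: str) -> bytes:
    """Downloaded route: the model file itself sits on your infrastructure,
    so access controls and usage terms around that file become relevant."""
    with open(weights_path, "rb") as f:   # e.g. a .safetensors / .ckpt file
        return f.read()                   # placeholder for real loading code
```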
3) Brand-safety risk is real but narrow
The court saw limited scenarios – mainly tied to older releases of the Stable Diffusion model and specific usage paths – where outputs could include another company’s watermark. That’s a brand-safety and content-quality issue and is already mitigated in newer models and by applying filters and guardrails.
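An output-side guardrail of the kind described above can be sketched very simply. The detector below is a stub; in a real deployment it would be a trained watermark/logo classifier or a platform safety feature, and the function and threshold names are ours, not a specific product's.

```python
# Illustrative sketch of a post-generation guardrail: every generated image is
# scanned before release, and flagged outputs are blocked. The detector is a
# stub standing in for a real watermark/logo classifier or vendor safety API.
from dataclasses import dataclass

@dataclass
class ScanResult:
    contains_watermark: bool
    confidence: float

def detect_watermark(image_bytes: bytes) -> ScanResult:
    """Stub detector - replace with a real classifier or safety service."""
    return ScanResult(contains_watermark=False, confidence=0.0)

def release_if_safe(image_bytes: bytes, threshold: float = 0.5) -> bytes | None:
    """Only release outputs that pass the brand-safety scan."""
    result = detect_watermark(image_bytes)
    if result.contains_watermark and result.confidence >= threshold:
        return None          # block and route for human review instead
    return image_bytes

safe_output = release_if_safe(b"...generated image bytes...")
```

The design point is that the check sits between generation and release, so a policy change – a new threshold or a new detector – can be rolled out in one place rather than per user.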
4) No broader findings on passing off or additional damages
The court didn’t expand liability beyond the limited trademark issues, and it declined to award additional damages.
Why this matters for business leaders
Clarity encourages adoption: The decision gives UK-specific answers that reduce perceived copyright exposure for building with, or buying, AI. For many organisations, this may remove a blocker to moving from pilots to scaled deployment.
Hosted-first is safer: Hosted access does not involve importing or possessing a copy of the model in the UK, and the court treated the model weights as not an infringing copy. Distributing model files raises separate governance considerations.
Modern image-generation models are safer by design: Later Stable Diffusion releases and enterprise platforms commonly include watermark filtering and other brand protections. Because hosted platforms deploy updates centrally, new usage rules, content filters and version safeguards can be rolled out to all users at once, which makes organisation-wide policy changes easier to apply and to evidence.
Practical considerations from the judgment
1. Hosted access and downloads serve different roles: hosted access typically avoids importing or possessing a model copy in the UK; distributing model files introduces separate governance considerations (eg, access control and usage terms).
2. Brand-safety measures are commonly available in newer Stable Diffusion releases and enterprise platforms (for example, watermark filtering and output scanning).
3. Model behaviour as described to the court: diffusion models learn patterns rather than storing training images, and the inference process does not access the original dataset.
4. Policies and procurement terms often shift after major judgments; readers may see vendor contracts and RFPs emphasise hosted access and content-safety assurances.
5. Model versions can shift the risk profile: later Stable Diffusion releases have been associated with fewer unwanted artefacts (eg, watermarks).
Beyond images: what might carry over to wider generative AI
This case concerned image generation. However, depending on facts and circumstances, it may be possible to extract principles that apply to LLMs more generally. For example:
- Hosted vs downloaded: The treatment of remote, hosted access as not importing a copy into the UK is architecture‑led rather than image‑specific; similar reasoning may apply to hosted LLMs.
- Parameters, not training data: The ‘model ≠ copy’ point turned on evidence that Stable Diffusion does not store training images. Many other generative systems, including LLMs, likewise encode what they have learned as parameters rather than retaining copies of their training data.
- Training not decided: The judgment did not determine the lawfulness of training on copyrighted material; that remains open across modalities and jurisdictions.
Looking ahead
This judgment may give organisations a clearer path to invest in AI with confidence. Other jurisdictions may take different approaches, and appeals are possible, but the direction of travel for the UK is positive: keep using modern, hosted models and move faster where AI delivers value.
How Endava helps
Endava helps clients design and run risk-aware, AI-native programmes. From architecture choices (hosted vs downloaded) to brand-safety filters and policy-as-code, we help you scale AI safely – without slowing delivery. If you’re ready to move from experimentation to enterprise-wide adoption, we can help you get there.
Disclaimer: This content is for general information only and does not constitute legal advice or create a solicitor–client relationship. Endava does not provide legal services. It reflects our understanding as of December 2025.
