Discussion about this post

Alexis Edwards

If AI use in planning remains unchecked for much longer, the case for removing the ability to object to proposals will strengthen. Whilst there is a fairness issue in having no recourse against a development proposal, plenty of other planning systems operate without public consultation of quite the kind England has.

Niall Cook

The problem isn’t AI, but how AI is being used by people who think ChatGPT can help with everything. When an AI system is properly grounded in authoritative data, with appropriate guardrails, it can make the work of appellants, agents, planners, consultees and lawyers more cost-effective, more efficient and/or better produced. But you can’t just rely on a general-purpose tool like ChatGPT.
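
For illustration, here is a minimal sketch of what “grounded with guardrails” might mean in practice. The corpus and the crude keyword retrieval are my own illustrative stand-ins for a real policy database and proper search; the point is simply that the model may only answer from authoritative sources, with citations, and refuses otherwise:

```python
# Minimal sketch of grounded generation with a guardrail (illustrative only).
# AUTHORITATIVE_SOURCES and the keyword match stand in for a real policy
# database and a real retrieval system.

AUTHORITATIVE_SOURCES = {
    "POLICY-H1": "New dwellings are acceptable within defined settlement boundaries.",
    "POLICY-ENV2": "Development affecting protected trees requires an arboricultural survey.",
}

def retrieve(question: str) -> list[tuple[str, str]]:
    """Return (reference, text) pairs sharing at least one word with the question."""
    words = set(question.lower().split())
    return [
        (ref, text)
        for ref, text in AUTHORITATIVE_SOURCES.items()
        if words & {w.strip(".,") for w in text.lower().split()}
    ]

def grounded_prompt(question: str) -> str:
    passages = retrieve(question)
    # Guardrail: refuse rather than let the model guess from its training data.
    if not passages:
        return "REFUSE: no authoritative source found; refer to a planner."
    sources = "\n".join(f"[{ref}] {text}" for ref, text in passages)
    return (
        "Answer ONLY from the sources below, citing the policy reference for "
        "every claim. If the sources do not answer the question, say so.\n\n"
        f"{sources}\n\nQuestion: {question}"
    )

print(grounded_prompt("Are new dwellings acceptable here?"))
```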

On the PINS guidance, whilst I applaud (and share) the desire for transparency and disclosure, I can’t help but feel that whoever wrote it has never actually used generative AI in their day-to-day work. If they had, they would quickly see how impractical (and, in places, impossible) some of these requirements are.

To give just one example, “Clearly label where you have used AI in the body” simply doesn’t work with most workflows. AI-assisted drafting is often a holistic process that takes place over time and over multiple sessions, not something that just edits a few paragraphs. The requirement for granular labelling just doesn’t make sense.

PINS would do better to look at how academia currently deals with this, and provide a simple AI use disclosure template to be submitted with each application (e.g. tools, models, tasks, declarations).
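
As a sketch of what such a template might capture, loosely modelled on academic disclosure statements (the field names here are my own, not any PINS or academic standard):

```python
# Illustrative AI use disclosure record. Field names are hypothetical,
# not taken from any published PINS guidance or academic template.
from dataclasses import dataclass

@dataclass
class AIUseDisclosure:
    tools: list[str]        # e.g. ["ChatGPT"]
    models: list[str]       # e.g. ["gpt-4o"]
    tasks: list[str]        # what the AI was used for
    human_review: bool      # whether a named person checked every output
    declaration: str        # statement that the author takes responsibility

disclosure = AIUseDisclosure(
    tools=["ChatGPT"],
    models=["gpt-4o"],
    tasks=["first draft of the planning statement", "summarising consultee responses"],
    human_review=True,
    declaration="I confirm the content has been verified and I take full responsibility for it.",
)
```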

But one also has to ask “why?” So that the inspector can ignore it? Inequitable. So that they can validate any citations? That needs new tools.

AI-assisted objections and SoCs are here to stay. What’s needed now are better tools to detect and validate information in submissions, both for creators and reviewers.
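
One concrete shape such a validation tool could take is a citation checker that flags references not found in a canonical register, which is the failure mode behind hallucinated citations. This is a sketch under the assumption that such a register exists; the register contents and the reference pattern below are illustrative, not real records:

```python
# Sketch of a citation validator for submissions: extract reference-like
# strings and flag any that cannot be matched to a canonical register.
# KNOWN_REFERENCES and REF_PATTERN are illustrative assumptions.
import re

KNOWN_REFERENCES = {
    "NPPF para 11",
    "Local Plan Policy H1",
    "APP/X1234/W/24/0000001",
}

REF_PATTERN = re.compile(
    r"NPPF para \d+|Local Plan Policy \w+|APP/\w+/\w+/\d+/\d+"
)

def validate_citations(submission_text: str) -> list[str]:
    """Return cited references that cannot be matched to the register."""
    cited = set(REF_PATTERN.findall(submission_text))
    return sorted(cited - KNOWN_REFERENCES)

# Example: a fabricated appeal reference is flagged for human checking.
text = "As held in APP/X1234/W/24/9999999, and consistent with NPPF para 11..."
print(validate_citations(text))  # ['APP/X1234/W/24/9999999']
```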
