How AI is slowing down the planning system and what we can do about it
AI is making it easier than ever to object to planning applications, but the resulting flood of low-quality material risks slowing decisions and obscuring genuine local concerns.
Clogging up the planning system hasn’t featured in the numerous predictions about the dystopian future artificial intelligence might have in store for us, but perhaps it should have.
Specialist websites will already prepare an AI-generated objection letter for a modest fee. Local residents are exchanging tips in Facebook groups on how best to craft prompts for AI to produce objections. The impact of that behaviour is already apparent.
One application for new homes in the Midlands has been the subject of more than 500 objection emails from a single local resident, all seemingly AI-generated. They included a 160-page “report” - littered with errors, hallucinations and irrelevant information - critiquing the applicant’s 35-page transport statement.
Elsewhere, an objector to an application for 160 homes inadvertently submitted their entire conversation with ChatGPT to the council rather than just the final objection letter. It was a worrying insight into how the technology can be used. After ChatGPT had produced an initial draft letter setting out their concerns and linked them to planning policy, the individual asked: “Is there anything else you could add?” Ever helpful, ChatGPT went on to list potential further objections relating to flood risk, heritage, community identity and the cumulative impact of development, none of which the local resident had seemingly been concerned about up until that point. Download it below and have a read for yourself.
AI is being used in the appeal process too. In the costs decision for a recent appeal in Wiltshire, the Inspector noted that one third-party objector had probably used AI to produce their evidence without declaring they had done so.
“It is not just the detail, scope and lengthy citation of appeal decisions and case law that raises suspicion,” the Inspector said, “but also the unusual layout and phraseology. It is very different to anything I have encountered before. No author other than PK is identified, who as far as I am aware, is not a planning professional. One is therefore inevitably drawn to ask how PK could have produced such a document which would have been a significant undertaking even for the most experienced planning consultant.”
He went on to note: “I have serious concerns that the [Statement of Case] was produced using AI, something which, undeclared, would in my view amount to unreasonable behaviour.”
The rationale for consulting the public on planning applications is that there might be things local communities know about an area that the applicant or case officer does not. That has clear merit, and nobody would want to stop it happening. The benefits are less obvious if we are actually canvassing the opinions of a computer server in Slough.
The ease with which AI allows objections to be submitted increases their number and length too. That doesn’t make a difference to the way planning officers deal with the comments - whether an issue is raised once or dozens of times, it is given the same consideration based solely on its planning merits (a fact national policy could perhaps make clearer). But it does increase the workload for planning teams and makes it harder for officers to identify what people are genuinely worried about.
Banning the use of AI isn’t the answer. As university professors will attest, detecting the use of AI is difficult. There is a difference, too, between using AI as a tool to tidy up your own thoughts - which few would object to - and asking it what you should think in the first place.
In the longer term, the solution is probably to fundamentally rethink the way we consult on planning applications. Rather than a self-selecting group of individuals submitting freeform responses, interactive consultation tools asking structured questions about critical matters such as highways, design and infrastructure improvements could help identify real issues. Polling a demographically representative group of local residents could produce a more accurate and useful picture of public opinion.
We do, though, need a short-term fix before already over-stretched planning officers are swamped by AI-generated responses and the planning system grinds to a halt.
Central to the problem is the low effort required to produce an AI objection and the lack of consequences when it is junk. Planning officers can’t simply ignore AI-generated comments; they have to be treated on the same basis as old-fashioned, human-produced ones. Even if the submissions are wrong, or there is no planning merit to the points being made, they still need reading and recording. At the very least that means applications take longer to process, at a time when the system is already too slow.
The Planning Inspectorate (PINS) might already have part of the answer in its existing guidance, “Use of artificial intelligence in casework evidence”. This seeks to introduce accountability for the information submitted without banning AI outright. It states:
If you use AI to create or alter any part of your documents, information or data, you should tell us that you have done this when you provide the material to us. You should also tell us what systems or tools you have used, the source of the information that the AI system has based its content on, and what information or material the AI has been used to create or alter.
In addition, if you have used AI, you should do the following:
Clearly label where you have used AI in the body of the content that AI has created or altered, and clearly state that AI has been used in that content in any references to it elsewhere in your documentation.
Tell us whether any images or video of people, property, objects or places have been created or altered using AI.
Tell us whether any images or video using AI have changed, augmented, or removed parts of the original image or video, and identify which parts of the image or video have been changed (such as adding or removing buildings or infrastructure within an image).
Tell us the date that you used the AI.
Declare your responsibility for the factual accuracy of the content.
Declare your use of AI is responsible and lawful.
Declare that you have appropriate permissions to disclose and share any personal information and that its use complies with data protection and copyright legislation.
By following this guidance, you will help us, our Inspectors, and other people involved in the appeal, application or examination to understand the origin, purpose, and accuracy of the information. This will help everyone to interpret it and understand it properly.
This ensures people take some responsibility for the comments they submit. If it makes them think twice, and means they at least read AI-generated content critically before sending it, that can only be a good thing.
The PINS guidance has been in place since September 2024, so it is already tried and tested. A quick, simple fix would be for the government to replicate it across the application process as a whole, buying time to think about a better long-term answer.
This could be supplemented by other PINS practices that already work well for planning appeals and local plan examinations. Word limits could be imposed on third-party submissions. Consultation periods could be strictly time-limited, rather than responses being accepted at any time, as is the current practice.
None of that disenfranchises local communities or downplays their views. It simply helps ensure we’re finding out what they actually think.
Technology represents a huge opportunity to improve our planning system in so many ways. Drowning it in lengthy, poor quality documents isn’t one of them.


If AI use in planning remains unchecked for much longer, the case for removing the ability to object to proposals will strengthen. Whilst there is a fairness issue in leaving people with no recourse against a development proposal, lots of other planning systems operate without public consultation in quite the form it takes in England.
The problem isn’t AI, but how it is being used by people who think ChatGPT can help with everything. When an AI system is properly grounded in authoritative data, with appropriate guardrails, it can make the work of appellants, agents, planners, consultees and lawyers more cost-effective, more efficient and better produced. But you can’t simply rely on a general-purpose tool like ChatGPT.
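To make that concrete, here is a minimal sketch of what “grounding” can mean in practice, assuming a toy corpus of policy snippets and naive keyword retrieval; the policy references, scoring and prompt wording are all illustrative, not any real system:

```python
# Illustrative only: "grounding" means retrieving the authoritative text first,
# then constraining the model to answer from it and nothing else. The corpus,
# retrieval and prompt below are toy stand-ins, not a real planning policy
# database or model API.

POLICY_CORPUS = {
    "H1": "New housing must demonstrate safe highway access and adequate visibility splays.",
    "D2": "Development should respond to the prevailing character and scale of the area.",
    "F3": "Proposals in flood zones 2 or 3 require a site-specific flood risk assessment.",
}

def retrieve(query: str, top_n: int = 2) -> list[tuple[str, str]]:
    """Naive keyword-overlap retrieval: rank policies by words shared with the query."""
    q_words = set(query.lower().replace("?", "").split())
    ranked = sorted(
        POLICY_CORPUS.items(),
        key=lambda item: len(q_words & set(item[1].lower().rstrip(".").split())),
        reverse=True,
    )
    return ranked[:top_n]

def grounded_prompt(query: str) -> str:
    """Build a prompt that cites only the retrieved policies: the guardrail."""
    cited = "\n".join(f"[{ref}] {text}" for ref, text in retrieve(query))
    return (
        "Answer using ONLY the policies below, citing them by reference. "
        "If they do not cover the question, say so.\n\n"
        f"{cited}\n\nQuestion: {query}"
    )

print(grounded_prompt("Does this housing proposal need a flood risk assessment?"))
```

Whether the model is then required to cite its sources, and to refuse when the sources are silent, is what separates a grounded system from asking ChatGPT cold.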
On the PINS guidance, whilst I applaud (and share) the desire for transparency and disclosure, I can’t help but feel that whoever wrote it has never actually used generative AI in their day-to-day work. If they had, they would quickly see how impractical (and, in places, impossible) some of these requirements are.
To give just one example, “Clearly label where you have used AI in the body” simply doesn’t work with most workflows. AI-assisted drafting is often a holistic process that takes place over time and across multiple sessions, not something that edits a few discrete paragraphs. The requirement for granular labelling doesn’t make sense.
PINS would do better to look at how academia currently deals with this, and provide a simple AI-use disclosure template to be submitted with each application (tools, models, tasks, declarations and so on), along the lines sketched below.
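For illustration only (every field here is my suggestion, not anything PINS has proposed), such a template might ask for:

Tools and models used (e.g. ChatGPT running GPT-4o).
Tasks the AI performed (drafting, summarising policy, checking citations).
Source material supplied to the AI.
Dates of use.
A declaration that the author has reviewed the output and accepts responsibility for its factual accuracy.

A single signed page of that would deliver most of the transparency the current guidance is reaching for, at a fraction of the burden.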
But one also has to ask why. So the Inspector can ignore the submission? That would be inequitable. So they can validate any citations? That needs new tools.
AI-assisted objections and Statements of Case are here to stay. What’s needed now are better tools for detecting AI use and validating the information in submissions, for creators and reviewers alike.