AI, digitalisation and the UK immigration system
Laura Devine
Laura Devine Immigration, London
laura.devine@lauradevine.com
Introduction
In recent years the UK government has pushed ahead with the large-scale digitalisation of immigration and border services. Artificial intelligence (AI)-driven tools, while still marginal, are increasingly becoming part of this programme. The stated aims of this automation have included expediting low-complexity transactions, automating identity and document checks, and freeing civil servants to focus on legally complex matters. However, civil society groups and lawyers have grown increasingly vocal about concerns over the transparency, fairness and lawfulness of automated tools.
At the same time, AI has begun to play a larger role in the provision of legal advice. Legal practitioners are beginning to integrate AI into their practices, with the potential to make legal services more efficient and therefore cheaper for applicants in the long term.
While AI tools remain limited in their use and scope within immigration administration and legal advice provision, they have the potential to alter the immigration landscape radically in the future.
Electronic Travel Authorisation (ETA) Scheme: 2025 position
One area of border administration highlighting the increasing use of automation is the UK’s Electronic Travel Authorisation (ETA) scheme.
The ETA scheme is now an established part of the UK’s immigration system. It requires non-visa nationals (who could previously visit the UK without a prior immigration application) to submit a short application form to be permitted to travel to the UK.
ETA processing is largely automated at the first-line checks stage (identity, criminality and basic eligibility), with refused applications then reviewed by a caseworker. This design reduces manual workload for routine arrivals and makes processing efficient, but it also makes the quality of the automated filters, and the oversight of them, central to the fairness of decisions. ETA applications are currently processed in a matter of hours.
However, automated decisions are only as good as the data on which they rely. Where digital Home Office records on applicants contain errors, ETAs can be refused incorrectly. Lawyers have highlighted examples of clients being refused ETAs on account of historic visa refusals that should have been immaterial to their applications. Where automated decision-making relies on erroneous data, the risk of unjust ETA refusals remains considerable.
Lawyers have also highlighted issues of proportionality. ETA rules mandate refusal of applications where an applicant has a historic criminal record, regardless of the time that has elapsed since the conviction. Due to the inflexibility of the relevant rules, applicants can face a lifetime ban from the UK for a criminal history from many decades ago.
There is currently a ‘grace period’ for enforcement of ETAs, during which the requirement to hold an ETA before travelling is slightly relaxed: travellers may board a flight provided they have submitted an application, regardless of whether a decision has been made on it. Once ETAs are fully in force, expected in 2026 although an exact date has yet to be confirmed, knowingly travelling to the UK without an ETA in place may be a criminal offence. The stakes for reliable automated decision-making will therefore be greatly increased, and the Home Office is consequently under pressure to resolve issues in its data and decision-making before the full enforcement of ETAs.
AI in immigration enforcement
The AI-powered Identify and Prioritise Immigration Cases (IPIC) system now plays a central role in certain Home Office enforcement processes, managing the cases of roughly 41,000 people facing potential removal from the UK. Whereas earlier tools functioned more like basic sorting algorithms, IPIC marks a major departure, reflecting a deeper integration of AI into immigration decision-making.
The system draws on extensive personal data to produce its outputs, reportedly including biometrics, ethnicity, criminal records and even live GPS location data. This represents a clear escalation in the use of AI at scale in immigration control.
Critics, including Privacy International, have raised concerns about the breadth of personal data collected, the opaqueness of how cases are prioritised, and the risks of reinforcing bias against certain groups. Civil liberties organisations have also argued that relying on such wide-ranging datasets for automated decision-making could undermine fairness, accountability and the right to privacy.
Digitalisation of immigration documents: what’s new
The Home Office’s drive to replace physical documents with digital records (often described as eVisas) is now complete for most immigration routes and in most circumstances, marking a significant change in the immigration landscape over the last few years.
The Home Office continues to push for an end-to-end digital process, with automated identity-checking services (using digital scans and photographs by way of smartphone apps) increasingly replacing in-person immigration appointments. Applicants applying from overseas to enter the UK under most work and study routes receive an eVisa instead of a physical vignette in their passport.
The delivery of this digitalisation agenda has been generally successful; however, technical issues and glitches have been widely reported, leaving some UK residents at risk of being unable to prove their status and, in turn, unable to secure work, accommodation or entry to the UK. Campaign group The3million’s periodic reports indicate that technical eVisa errors remain widespread. Other serious errors reported by legal practitioners include clients with digital-only status being unable to enter the UK, or facing lengthy questioning at the border, because Home Office systems could not access their eVisa records. Legal practitioners have argued that such cases highlight the danger of relying on flawed digital systems without adequate safeguards or physical backups in place.
AI in Home Office decision-making
In terms of public announcements, there is currently little to indicate that the Home Office uses AI for complex processes such as deciding UK visa or asylum applications. However, in 2025 the Home Office trialled the use of AI to simplify decision-makers’ workflows, under the title ‘Evaluation of AI trials in the asylum decision making process’. The trial covered two AI-powered tools: one summarising asylum seekers’ interview transcripts, the other finding and summarising third-country information. The Home Office stated that these tools were designed as aids to improve decision-makers’ efficiency and do not, and cannot, replace any part of the decision-making process. It remains to be seen if and when AI will begin to automate complex functions such as decision-making itself.
AI use by lawyers: market and tools
AI tools for legal research and application support continue to expand. Commercial legal platforms (eg, established legal research services and specialist nationality/eligibility checkers) have enhanced the services they provide with natural-language tools and automated eligibility assessment.
These products can speed up assessment and legal research, automate administrative tasks and even undertake legal drafting. However, they are not yet sophisticated enough to replace humans: human oversight remains essential to meet regulatory, evidential and confidentiality obligations.
Where used appropriately, AI tools have the potential to make the provision of immigration legal services more efficient and therefore more affordable for migrants and their families. However, there are significant risks where they are used inappropriately. Recent tribunal cases have highlighted how machine-generated legal submissions in immigration cases can contain hallucinations and fictional authorities, not only causing reputational and professional damage to lawyers but also risking the refusal of vulnerable clients’ appeals.
Conclusion
The integration of AI into the UK’s immigration system may prove to be a profound shift in how migration is administered and enforced. While automation promises efficiency, whether in processing ETAs or streamlining case-working processes, it also magnifies the risks of error and lack of accountability when decisions rely on flawed or opaque data. While genuine machine learning tools remain limited in their use within Home Office systems, in the future complex processes requiring human intelligence, such as deciding whether to approve or refuse a visa application, may be increasingly automated.
Similarly, AI-assisted tools in legal practice hold real potential to make advice more accessible, but only where they complement rather than replace the judgement of trained professionals. As the UK moves towards full enforcement of ETAs and deeper reliance on machine learning systems, questions of transparency and human oversight will become increasingly urgent. As AI reshapes the immigration system, legal professionals must lead efforts to ensure that technology serves, rather than undermines, justice.