AI and employment law gaps: three ways businesses can navigate current and future challenges

By David Banaghan, Co-Founder and Interim CEO at Occupop

According to UK Government research, one in six organisations now uses AI technology, transforming how businesses operate and make decisions.

The IT and Telecommunications sector blazes a trail with an adoption rate of 29.5%, closely followed by the Legal sector (29.2%), while hospitality (11.9%), health (11.5%) and retail (11.5%) are the slowest to embrace the new technology.

One reason for this slower uptake could be that businesses simply don’t know how best to use AI. At present, no legislation specifically governs the use of AI at work.

David Banaghan, Co-Founder and Interim CEO at recruitment software experts Occupop, explains: “Currently, the regulation of AI in the workplace is primarily guided by existing employment legislation, much of which predates the iPhone era. This means it’s not tailored to accommodate the complexities of AI integration.

“While businesses that successfully implement AI can emerge stronger and more competitive, it’s important to embrace its potential while upholding ethical standards and legal compliance.”

Here are some tips for success.

Employers must decipher how AI intersects with laws

Unlike other emerging technologies, AI lacks dedicated legislation of its own.

This means employers face uncertainty when using AI systems as they grapple with outdated legal frameworks.

Existing laws touch on aspects relevant to AI, such as privacy, discrimination, and data protection. However, their application to AI scenarios requires interpretation. Breaches can lead to large fines and lasting reputational damage.

Employers must decipher how AI intersects with laws like the General Data Protection Regulation (GDPR), the Equality Acts and the Human Rights Act. This complexity demands vigilance and legal expertise, which is likely to dissuade some businesses.

Responsible AI deployment requires ongoing audits and assessments

While the AI market may be worth £16.9 billion to the UK economy, successful implementation is not without challenges.

Here are some potential common pitfalls:

Discrimination and bias: One study of AI projects predicted that 85% would deliver erroneous outcomes due to bias in data, algorithms or the teams managing them.

Many large language models, such as ChatGPT, are trained only on data up to a fixed cut-off date (2022 for some models) and on text crawled from unvetted, sometimes disreputable sources. This can lead to the production of racist, sexist or otherwise offensive material.

Data protection and privacy: In one survey, 76% of CEOs said they were concerned about a lack of transparency and its effect on adoption.

AI interacts with personal data, meaning employers must adhere to data protection laws (such as GDPR) when implementing AI systems in the workplace.

Employees have rights regarding their data, and compliance is non-negotiable.
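
One practical way to reduce data protection risk is to minimise the personal data an AI system ever sees. Below is a minimal Python sketch of that idea, stripping direct identifiers from free text before it is passed to any external AI service; the regex patterns and the redact() helper are illustrative assumptions, not a complete or legally compliant anonymisation pipeline.

```python
import re

# Illustrative patterns for direct identifiers; a real pipeline would
# cover more formats and jurisdictions than these two.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace direct identifiers with placeholder tokens before the
    text is sent to any external AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

cv_snippet = "Contact me at jane.doe@example.com or +44 7700 900123."
print(redact(cv_snippet))
# Contact me at [EMAIL] or [PHONE].
```

A production pipeline would also need to catch names and postal addresses, which typically requires named-entity recognition rather than simple patterns.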

The ethics of AI: While AI law remains a grey area, employers must demonstrate ethical AI practices. Fairness, accountability and transparency are all essential when using these tools.

Responsible AI deployment requires ongoing audits and assessments. AI systems should be regularly evaluated for ethical compliance and, where appropriate, alignment with industry best practices.
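
As one illustration of what such an audit could look like in practice, here is a minimal Python sketch that checks a batch of anonymised screening decisions for disparate impact, using the "four-fifths" rule of thumb from US selection guidance as its threshold. The data format, group labels and 0.8 threshold are assumptions for illustration, not legal advice.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, shortlisted_bool) pairs."""
    totals, passed = defaultdict(int), defaultdict(int)
    for group, shortlisted in decisions:
        totals[group] += 1
        if shortlisted:
            passed[group] += 1
    return {g: passed[g] / totals[g] for g in totals}

def flag_disparate_impact(decisions, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold`
    times the highest group's rate (the four-fifths rule of thumb)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate for g, rate in rates.items() if rate < threshold * best}

# Example: one month of anonymised screening outcomes.
sample = [("group_a", True), ("group_a", True), ("group_a", False),
          ("group_b", True), ("group_b", False), ("group_b", False)]
print(flag_disparate_impact(sample))  # {'group_b': 0.33...} -> investigate
```

A flagged group is not proof of unlawful discrimination, but it is a signal that the system's decisions warrant human review before the tool stays in use.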

Businesses need to proactively address AI’s impact in their day-to-day use

While legislation catches up with AI’s rapid evolution, businesses must proactively address its impact in their day-to-day use. Here’s a checklist for business leaders:

Stay Informed: Continually update knowledge about AI advancements and their implications.

Collaborate: Engage with legal experts, industry peers and policymakers if you’re unsure where you stand legally.

Adapt Ethical Principles: Align AI practices with your own organisational values and retain a people-led approach.

Monitor Developments: Read web articles, white papers and social media posts to stay informed about emerging AI regulations.

FAQs

  1. How can businesses ensure ethical AI deployment?
    To ensure ethical AI deployment, businesses should focus on fairness, transparency and accountability in their AI practices. This involves regular audits of AI systems to identify potential bias or ethical concerns, especially in decision-making processes that could affect employees or customers. Businesses must also ensure compliance with data protection laws such as GDPR, which requires that AI systems handling personal data do so responsibly and securely. Aligning AI initiatives with the company’s ethical standards and organisational values, and maintaining a people-centred approach, helps companies avoid reputational damage and foster trust.
  2. What laws currently apply to AI use in the workplace?
    While there are no laws specifically designed for AI, existing regulations provide guidance for businesses using AI systems. For example, the General Data Protection Regulation (GDPR) governs how AI interacts with personal data, requiring transparency and consent in data usage. The Equality Acts protect employees from discrimination, meaning that AI systems must be designed to avoid bias or unfair treatment. The Human Rights Act also applies, ensuring that AI tools respect the basic rights of individuals. However, since these laws were not created with AI in mind, their application can be complex, requiring careful interpretation and legal expertise to ensure compliance.
  3. Why are some industries slower to adopt AI?
    The slow adoption of AI in industries like hospitality, retail and health is partly due to a lack of understanding of how best to integrate the technology. Many businesses in these sectors are uncertain about how AI fits within current legal frameworks, and fear of violating data protection laws or accidentally introducing biased AI systems adds to their hesitation. Additionally, these industries may have less immediate access to resources for AI implementation or lack the expertise to navigate the regulatory landscape. As a result, the complexity and uncertainty surrounding AI regulations act as barriers to wider adoption in these fields.