Artificial Intelligence Update

As we continue our exploration of AI regulation and its implications for the legal profession, we had the pleasure of attending this year’s Society for Computers and Law AI Conference. Let’s unpack some of the insights from the day.

Key Takeaways from the SCL Event

  • The UK has an opportunity to play a significant role in global AI regulation, balancing innovation and responsible development (Lord Holmes).

  • AI is transforming legal service delivery, with opportunities in automating labour-intensive processes and improving client service.

  • Law firms and in-house legal teams need to adapt their business models to maximise the value of AI.

  • There’s a growing need for lawyers to upskill in tech and business to provide better client service in the AI era. One example showed a firm successfully implementing AI in invoice narrative review, significantly reducing review time and improving accuracy.

  • Explainable AI is crucial to build trust.

  • Business leaders need to focus on short-term and long-term AI strategies, ethical considerations, and alignment with AI regulations.

  • Top-down leadership and corporate culture are essential for successful AI adoption in businesses.

  • There’s a need for better governance and education around AI use in the workplace to manage risks and data security.

  • The legal community should be more involved in public debates about AI and its regulation (as commented on by Prof. Richard Susskind).

  • The global regulatory landscape for AI is evolving.

  • Lord Holmes’s AI Regulation Bill proposes principles of trust, transparency, inclusion, innovation, interoperability, and accountability for AI governance.

Keynote Address

The day began with a keynote address by Lord Holmes, setting the tone for the discussions on AI regulation and its implications. A life peer in the House of Lords and a prominent figure in technology policy, he outlined his work on AI regulation in the UK Parliament. His keynote not only emphasised the need for regulation but also highlighted the economic significance of AI, citing a striking statistic:

“PwC, say, 15.7 trillion … to global GDP year on year by 2030.”

This economic potential underscores the importance of getting AI regulation right - balancing innovation with ethical considerations and public safety. Lord Holmes said:

“Reasons to legislate, reasons to get right size regulation, …which is always good, certainty, clarity, innovation, inward investment. Good for markets, good for me, good for you. I said, principles based, outcomes focused, inputs … well recognised by anybody who’s done stuff in the legal field, trust and transparency, inclusion, innovation, interoperability and international approach, accountability, assurance, accessibility, good principles….

“we have an extraordinary opportunity, but opportunities are only good if we take them, it’s beholden on all of us to keep that pressure on [UK] government, to do something, to act, because we have such an opportunity still in the UK, and we owe it to all of our citizens, all of our communities, all of our cities, all of our businesses, to do something to enable the opportunities to come through, which won’t just of themselves”.

Global Regulatory Landscape

Panels and discussions continued throughout the day, both in the room and during the breaks, culminating in a session on the legal and regulatory landscape.

The session provided an overview of global regulatory approaches:

  1. Chinese Regulatory Approaches: China’s influence on AI supply chains is very evident, as are its regulations on algorithms, deepfakes, and generative AI.

  2. US and UK Regulatory Approaches: The market-driven approach in the US was contrasted with the evolving landscape in the UK. Various state-level AI laws in the US were mentioned (e.g., Colorado).

  3. EU AI Act: The discussion naturally touched on the EU AI Act, focusing on high-risk AI and the delay in the introduction of international technical standards.

Challenges for In-House Teams

The challenges of building regulatory compliance for startups and established institutions were addressed. For startups, the focus was on managing limited resources while meeting regulatory requirements. For established institutions, such as JP Morgan Chase, a concentric-circle approach to AI regulation was described, emphasising cross-disciplinary collaboration within the company - something that many of you may find useful in your own organisations.

Need for Mindset Shift

The discussions underscored the need for a mindset shift among lawyers. The lawyers of the future will need to be not just legal experts but also tech-savvy business advisors, if they are not already.

Opportunities and Support

As you navigate the complex world of AI regulation and adoption, remember that you’re at the forefront of a transformative period in legal practice. The challenges are significant, but so are the opportunities. By staying informed, adaptable, and forward-thinking, you can help your organisations not just comply with AI regulations, but thrive in this new landscape.

At RMOK Legal, we specialise in helping legal departments navigate the complexities of AI adoption and compliance. We are ready to assist you in:

  • Developing AI Strategies: Crafting short-term and long-term AI strategies that align with your business goals and regulatory requirements.

  • AI Contract Terms: Crafting and reviewing new AI contract provisions.

  • Upskilling Your Team: Providing training and resources to ensure your legal team is equipped to meet new demands such as the AI literacy obligations under the EU AI Act, which apply from February 2025.

May your AI initiatives be compliant, your algorithms be ethical, and your legal tech stack be ever evolving!
