AI Regulation in the UK: How Boards Manage AI Risks and Ethics

Artificial Intelligence (AI) is advancing swiftly and is increasingly embedded in everyday applications and decision-making. As a result, the ethics, governance, and regulation of AI are becoming ever more significant for legislative bodies.

The UK is working towards AI-specific regulation, but the process has been pushed back from this year to the next. There is currently no general regulation of AI in the UK, although existing laws, such as the UK GDPR, touch on AI in practice.

So what does the AI regulatory landscape look like in the UK, and where is it heading?

What are the AI Regulations in the UK?

The UK Government’s current approach to AI regulation focuses on establishing AI principles rather than rigid legislation. Rather than building a general legal framework, it is letting existing regulators address AI-related risks, challenges, and opportunities within their own sectors, making the approach context-specific.

There are no AI-specific laws yet, so the government is relying on existing regulation, such as the UK GDPR and the Data Protection Act 2018, to govern AI’s use of personal data.

The government has been preparing an AI Bill, but no draft has been released as yet. In its absence, guidance can be found in the Government’s 2023 Whitepaper, updated in February 2024. It marks an initial move by the UK to establish a dedicated national AI strategy and a framework for the ethical development and application of AI.

The key aspects of the Whitepaper are:

  • Five values-focused cross-sectoral principles for regulators to interpret and apply within their respective areas, intended to promote responsible AI use.
  • No new AI regulator; instead, existing regulators will guide AI development.
  • Central support via a steering committee to coordinate and monitor regulators.

By comparison, the European Union’s AI Act, which came into force in August 2024, takes a more active regulatory approach. The Act classifies AI applications into risk levels, each with associated legal obligations and substantial fines for non-compliance.
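
To make the tiered model concrete, here is a minimal Python sketch of how a risk-tiered classification might be represented. The tier names follow the Act's broad categories, but the example use cases, the mapping, and the conservative default are hypothetical illustrations, not legal guidance:

```python
from enum import Enum

class RiskTier(Enum):
    """Broad risk tiers in the spirit of the EU AI Act (illustrative only)."""
    UNACCEPTABLE = "prohibited"       # e.g. social scoring by public authorities
    HIGH = "strict obligations"       # e.g. AI used in recruitment decisions
    LIMITED = "transparency duties"   # e.g. chatbots must disclose they are AI
    MINIMAL = "no extra obligations"  # e.g. spam filters

# Hypothetical mapping of example use cases to tiers -- not legal advice.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Look up the illustrative tier and obligations for a given use case."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)  # default conservatively
    return f"{tier.name}: {tier.value}"
```

A compliance team would map real systems to tiers with legal advice; the point of the structure is that obligations scale with risk rather than applying uniformly.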

The United States has adopted a more business-driven and industry-led strategy, although its federal government has also been directed to follow essential guidelines for the responsible application of AI.

In January 2025, the government released its AI Opportunities Action Plan aimed at accelerating AI integration throughout the UK. Major initiatives feature the establishment of AI Growth Zones to enhance infrastructure, a twenty-fold increase in public computing capacity, and the creation of a National Data Library to leverage public data for AI advancement.

Alongside this, a Private Members’ Artificial Intelligence (Regulation) Bill was introduced to the House of Lords in March 2025. The Bill would require the creation of an “AI Authority” – a new regulatory body that would regulate AI according to the approach outlined in the Bill.

What is the current state of the upcoming AI Bill?

Plans to regulate AI have been postponed for over a year, as UK officials prepare extensive legislation to oversee the technology and its handling of copyrighted content.

The Technology Secretary, Peter Kyle, aims to introduce a ‘comprehensive’ AI bill in the next parliamentary session to address concerns surrounding copyright and safety.

Labour had planned to introduce a narrower, shorter AI Bill within months of forming a government. It would have focused on Large Language Models (LLMs) such as ChatGPT, requiring companies to submit their models for evaluation by the UK AI Security Institute.

The passing of this legislation was postponed as ministers opted to coordinate with Donald Trump’s administration in the United States, fearing that any regulation could diminish the UK’s appeal to AI firms. The UK Government is trying to align its ‘pro-innovation’ approach to AI with that of the US.

Now, ministers are aiming to incorporate copyright rules into the AI Bill. In June of this year, an amendment to a new data bill was backed that would require AI companies to disclose whether they were using copyrighted material to train their models.

Why is it important to understand AI and its regulations?

AI serves as a potent tool for enhancing oversight and decision-making, but also as a source of risk that requires careful management. Keeping track of AI regulation not only ensures that your company avoids potential penalties for its use of AI, but also helps to establish best AI practice across your organisation.

There are different kinds of risks all organisations face when incorporating AI into their processes, including ethical and reputational risks, cybersecurity threats, regulatory compliance risks, and data privacy concerns.

Specifically in Board meetings, AI can bring several potential risks for organisations:

Data Privacy Breaches: Improper handling of data or unauthorised access could lead to leaks of confidential boardroom information.

Cybersecurity Vulnerabilities: AI systems can be targeted by hackers exploiting vulnerabilities in software, hardware, or cloud services.

Inadequate Data Governance: Poorly managed data flows between AI systems and external servers could result in unintentional sharing of confidential data.

Lack of Explainability: If an AI system generates outputs without transparency, it may inadvertently reveal or misrepresent sensitive board discussions.

Vendor Reliance and Third-Party Risks: Third-party AI vendors might not have robust security measures in place, leading to potential data leaks.

Data Persistence and AI Training Risks: Some AI systems retain input data for training, creating a risk of unauthorised reuse.
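
A board secretariat might track risks like these in a simple register, scoring each by likelihood and impact. The sketch below is a generic Python illustration: the risk names mirror the list above, while the scoring scale, mitigations, and escalation threshold are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One entry in a boardroom AI risk register (illustrative structure)."""
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (minor) .. 5 (severe)
    mitigation: str = ""

    @property
    def score(self) -> int:
        # Classic likelihood-times-impact heat-map score.
        return self.likelihood * self.impact

def top_risks(register: list[AIRisk], threshold: int = 12) -> list[str]:
    """Return risk names at or above the (hypothetical) escalation threshold."""
    ranked = sorted(register, key=lambda r: r.score, reverse=True)
    return [r.name for r in ranked if r.score >= threshold]

register = [
    AIRisk("Data privacy breach", 3, 5, "encrypt data at rest and in transit"),
    AIRisk("Vendor reliance", 2, 3, "security review of third-party vendors"),
    AIRisk("Data persistence in training", 4, 4, "contractually bar input reuse"),
]
```

Whatever the exact scoring scheme, the value of a register is that it forces the board to name each risk, assign an owner, and revisit the mitigation rather than treating "AI risk" as one undifferentiated concern.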

This is why it’s important to ensure your AI tools are secure and compliant: no innovative new tool is more important than security. Look for AI platforms and tools that you can trust with your data and with your meetings.

Boards need to make sure the tools they use do not compromise any of their data. Confidentiality and security are non-negotiable for Board management.

Convene and AI

From making the agenda, to compiling the Board pack, to taking meeting minutes, Board meeting processes can be fragmented and incredibly time-consuming. In this continually evolving corporate landscape, it’s important that governance procedures are as effective as possible.

This is why purpose-built Artificial Intelligence tools are rapidly being introduced into the Boardroom. These AI tools are helping executive teams to free themselves from time-consuming admin tasks and enabling directors to focus on making those crucial Board decisions.

From scheduling meetings, to summarising key meeting points, AI tools can streamline processes and heighten engagement.

Convene is committed to supporting organisations as they adopt AI technologies while maintaining the highest standards of security and compliance. Our approach to AI prioritises private deployments, strict data lifespan management, GDPR compliance, and AI-specific non-disclosure agreements. 

Our AI functionality will be hosted through Amazon Bedrock, ensuring that data remains securely within a private cloud environment and is never used for external model training. Your information will remain confidential and is used solely to optimise services for each organisation. We are also hard at work to ensure we will be compliant with the EU AI Act.
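
Convene's exact integration is not public, but as a generic sketch, an application calling a model through Amazon Bedrock inside its own AWS account might look like the following, using the boto3 Converse API. The model ID, region, and prompt wording are placeholders, not Convene's actual configuration:

```python
# Placeholder model ID and region -- a real deployment would use its own.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"
REGION = "eu-west-2"

def build_request(question: str, document_text: str) -> dict:
    """Assemble a Bedrock Converse request from a board document and a question."""
    return {
        "modelId": MODEL_ID,
        "messages": [{
            "role": "user",
            "content": [{"text": f"Board pack excerpt:\n{document_text}\n\nQuestion: {question}"}],
        }],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    }

def ask(question: str, document_text: str) -> str:
    """Send the request to Bedrock and return the model's text reply."""
    import boto3  # third-party AWS SDK; credentials come from the environment
    client = boto3.client("bedrock-runtime", region_name=REGION)
    response = client.converse(**build_request(question, document_text))
    return response["output"]["message"]["content"][0]["text"]
```

Because the model is invoked within the organisation's own AWS environment, prompts and documents are not fed into an external training pipeline, which is the property the paragraph above highlights.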

Here’s an overview of the upcoming AI-driven features designed to further support governance teams:

For Administrators:

  • Automated meeting summaries capturing key points and decisions.
  • Quick access to key discussion points.
  • Suggested action items generated from meeting content.
  • Immediate production of refined meeting minutes for fast approval.

For Directors:

  • Concise content summaries alongside board packs.
  • Ability to query documents and reports through a chat-based interface.
  • Access to historical information for reference.
  • AI chatbot support to assist in navigating meeting materials and reports.


Charlotte Wright

Charlotte works as a Content Writer at Convene.

