The Marketing Centre AI Usage Policy

September 2024

Introduction

At The Marketing Centre, we are committed to leveraging Artificial Intelligence (AI) to enhance our services while upholding the highest standards of ethics, transparency, and data security. This policy outlines our approach to AI usage, ensuring that our practices align with both our organisational values and the expectations of our clients.

Purpose

This policy is designed to:

  1. Promote the ethical and responsible use of AI across our operations.
  2. Ensure full compliance with all relevant laws and regulations, including GDPR.
  3. Protect the privacy and security of data entrusted to us.
  4. Provide clear guidelines for the use and oversight of AI tools within our services.

Scope

This policy applies to all aspects of AI usage within The Marketing Centre, including AI functionalities embedded in both emerging generative AI platforms (such as ChatGPT and Gemini) and established software applications such as HubSpot, Trello, and the Microsoft Office Suite (e.g., MS Word, Excel). It covers all employees, principals, contractors, vendors, and partners (hereafter referred to as "team members") who interact with these technologies in the course of their work with us.

Ethical Principles

  1. Transparency: We are dedicated to ensuring transparency in our AI practices, clearly communicating when and how AI is utilised in our decision-making processes.
  2. Fairness: We strive to ensure that our AI systems are fair, free from biases, and do not result in discriminatory outcomes.
  3. Accountability: All team members, including principals and employees, are responsible for the ethical use of AI, ensuring that human oversight is maintained.
  4. Privacy: We are committed to protecting personal data in line with GDPR and other relevant regulations, ensuring that data privacy is a cornerstone of our AI usage.
  5. Explainability: We ensure that AI-driven decisions are explainable, providing clarity and justification to our clients and stakeholders.

Data Management Practices

  1. Data Quality: We use accurate, representative, and high-quality datasets to inform our AI systems, ensuring reliable and valid outputs.
  2. Data Security: We implement robust security measures to protect the integrity and confidentiality of data processed by our AI systems.
  3. Anonymisation: We anonymise data wherever possible to protect individual privacy and comply with data protection regulations.
  4. Data Retention: Data is retained only for as long as necessary to fulfil its intended purpose, after which it is securely deleted or anonymised.
  5. Confidentiality: We ensure that no confidential or sensitive company or client data is shared with AI tools unless doing so is essential for the task and appropriate security measures are in place. Our team members are trained to verify that AI tools comply with our stringent data protection policies before use.

AI Tool Usage

  1. Responsibility: Our team members are responsible for evaluating and using AI tools in a manner consistent with our ethical guidelines, data privacy requirements, and security standards.
  2. Technology Assessment: We carefully assess AI functionalities within both generative AI platforms and established software applications to ensure they are used appropriately and ethically, and that they meet our high standards of data protection.
  3. Validation: We rigorously validate AI-generated outputs, particularly those involving sensitive or client-related data, to ensure their accuracy, fairness, and compliance with our policy.


Human Oversight

  1. AI as a Support Tool: We use AI to support, not replace, human decision-making. Our team members remain integral to the decision-making process, ensuring that AI enhances rather than supplants human judgment.
  2. Review of Significant Decisions: All significant decisions affecting clients or stakeholders are subject to human review and approval, ensuring that the final decision aligns with both our ethical standards and client expectations.


Compliance and Governance

  1. Regular Review: Our team members regularly review AI usage within their work to ensure compliance with this policy, and to adapt to any changes in technology or regulations.
  2. Incident Reporting: We have established clear protocols for reporting any concerns or breaches related to AI usage. Team members are required to report these to the Compliance Officer within The Marketing Centre or directly to the client, as appropriate, ensuring that issues are promptly addressed and resolved.


Monitoring and Continuous Improvement

  1. Ongoing Monitoring: We continuously monitor our AI systems to detect and mitigate any biases or errors. This proactive approach allows us to maintain the highest standards of quality and reliability in our AI-driven processes.
  2. Policy Updates: This policy is reviewed and updated annually or as necessary to reflect technological advancements, changes in legal requirements, and the evolving needs of our clients and stakeholders.


Communication and Training

  1. Responsibility for Communication: Our team members are responsible for reviewing any changes to AI functionalities within both generative AI platforms and established software applications to ensure those changes comply with this policy.
  2. Training: Our team members regularly attend training provided by The Marketing Centre to stay informed on how to use AI responsibly and in compliance with this policy.


The Marketing Centre is dedicated to using AI in a way that benefits our clients while upholding our commitment to ethical practices, transparency, and accountability. By adhering to this policy, we ensure that our use of AI technologies is not only effective but also aligned with our values and the high standards expected by our clients.

Should you have any questions regarding this policy, please contact us by emailing privacyteam@themarketingcentre.com.