Surfing the AI wave: Key roles and questions for organisational success

By Aysha Sassman, Software Quality Assurance Specialist, iOCO.
Surfing the AI wave. (Image: iOCO)

Imagine standing on the edge of a vast ocean, ready to catch the perfect wave. This is what embarking on an AI journey feels like – thrilling, challenging and full of potential. Just as a surfer needs the right skills and equipment to ride the waves, organisations need a team of skilled professionals to navigate the complexities of AI implementation. Let's dive into the key roles that will help your organisation catch the perfect AI wave and ride it to success!

Why is it crucial to identify and involve specific roles in the successful implementation of AI within an organisation?

Not defining roles in AI implementation can lead to several negative consequences. Without clear roles, teams may struggle to understand their accountability, resulting in confusion and inefficiency. Collaboration between different functions can become chaotic, leading to miscommunication and delays. Ensuring data quality, model performance and ethical standards becomes challenging, which can result in poor quality AI solutions. Undefined roles can also lead to overlooked compliance and governance issues, increasing the risk of legal and ethical violations. Additionally, resources may be misallocated, wasting time and effort. Ultimately, AI initiatives may fail to align with business objectives, resulting in ineffective solutions that don't address key business needs.

Below, I highlight the key groupings and roles that should form part of the team, along with the key questions each should answer. Together, these enable the creation of an evolving 90-day roadmap. I would also advise against aiming for perfection: rather build momentum at each 90-day milestone on the roadmap and review lessons learnt.

AI governance and strategy team

This team is accountable for reading the waves: setting the strategic direction and ensuring that governance, compliance and AI ethics are effective. It also manages implementation risks and establishes a risk framework.

  • Chief information officer (CIO) – Accountability: Strategic alignment of AI initiatives with business goals, governance, compliance and technological capabilities.

Key questions:

  • How does AI integration align with our overall business strategy?
  • What governance frameworks and compliance standards need to be established?
  • How will we monitor and report on AI project progress and outcomes?
  • How can we ensure cross-departmental collaboration and communication?

  • AI strategist – Accountability: Identifying AI use cases, developing implementation roadmaps and defining success metrics.

Key questions:

  • What are the most relevant AI use cases for our business?
  • How will we prioritise AI initiatives based on impact and feasibility?
  • How will we conduct market research and stay updated on AI trends?
  • How can we develop and present business cases for AI projects?

  • AI architect – Accountability: Designing the overall structure of AI systems, ensuring scalability, reliability and security.

Key questions:

  • How will we design AI systems to align with business objectives and technical requirements?
  • What architectural patterns will we use to ensure scalability and security?
  • How will we oversee the integration of AI systems with existing infrastructure?
  • How can we ensure compliance with data privacy and security regulations?

  • Ethics compliance manager – Accountability: Ensuring AI solutions are developed and used ethically, managing biases and ensuring transparency.

Key questions:

  • How will we ensure ethical AI usage and manage biases?
  • What measures will we take to ensure transparency in AI decisions? (A minimal model documentation sketch follows this list.)
  • How will we develop and implement ethical guidelines for AI development?
  • How can we conduct regular audits to ensure compliance with ethical standards?
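
To make transparency and regular audits more concrete, many teams keep a lightweight "model card" alongside every deployed model, recording its purpose, data sources and known limitations. The sketch below is one possible minimal form, assuming a Python environment; the ModelCard fields and the example values are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class ModelCard:
    """Minimal record of a model's purpose, data and known limitations."""
    name: str
    version: str
    owner: str                      # accountable role, e.g. the ethics compliance manager
    intended_use: str
    training_data: str              # description of data sources, not the data itself
    known_limitations: list[str] = field(default_factory=list)
    last_bias_audit: date | None = None

    def to_json(self) -> str:
        return json.dumps(asdict(self), default=str, indent=2)

# Hypothetical example values for illustration only.
card = ModelCard(
    name="credit-risk-scorer",
    version="1.3.0",
    owner="Ethics compliance manager",
    intended_use="Rank loan applications for manual review, not for automated rejection",
    training_data="Internal loan book 2018-2023, anonymised",
    known_limitations=["Under-represents applicants younger than 21"],
    last_bias_audit=date(2024, 11, 1),
)
print(card.to_json())
```

A record like this gives the ethics compliance manager something concrete to audit against, and gives business stakeholders a plain-language answer to "what is this model allowed to do?"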

What could go wrong?

AI implementation can bring significant benefits, but it also comes with risks, especially if not managed properly. The key risks that arise when AI is implemented incorrectly include:

  • Bias and discrimination: AI systems can perpetuate and amplify biases present in their training data, leading to discriminatory outcomes. For example, one widely reported AI recruiting tool developed a bias against female candidates because of biased training data. (A minimal fairness check is sketched after this list.)
  • Lack of transparency: Opaque "black box" algorithms can make it challenging to understand how decisions are reached, raising concerns about fairness, privacy and due process.
  • Over-reliance on AI: Users may overestimate AI capabilities, leading to dangerous outcomes. One car maker's "autopilot" feature faced criticism after several high-profile accidents in which drivers treated it as fully autonomous, despite it being a level two driver assistance system that requires active supervision.
  • Misalignment with real-world needs: AI solutions may not align with real-world needs, leading to ineffective or harmful outcomes. One high-profile AI solution for oncology struggled to provide relevant and accurate treatment recommendations because its capabilities were misaligned with clinical needs, and it was eventually abandoned.
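
The bias risk above is one of the few on this list that can be partially caught with a simple pre-deployment check. The sketch below compares selection rates between groups in the style of a demographic parity test, assuming a pandas environment; the column names, sample data and the 0.8 threshold (the common "four-fifths" rule of thumb) are illustrative assumptions.

```python
import pandas as pd

def selection_rate_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Return the min/max ratio of positive-outcome rates across groups (1.0 = parity)."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical screening results: 1 = shortlisted, 0 = rejected.
candidates = pd.DataFrame({
    "gender":      ["F", "F", "F", "F", "M", "M", "M", "M"],
    "shortlisted": [ 1,   0,   0,   0,   1,   1,   1,   0 ],
})

ratio = selection_rate_ratio(candidates, "gender", "shortlisted")
print(f"Selection rate ratio: {ratio:.2f}")
if ratio < 0.8:  # the 'four-fifths' rule often used as a first screening threshold
    print("Warning: possible disparate impact - investigate before deployment.")
```

A check like this does not prove fairness, but it gives the governance team an early, repeatable warning signal.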

AI data team

This is the team that ensures good quality data feeds the ML models. Robust data management and security protocols are critically important, akin to a surfer's need for a reliable board and safety gear to ride the waves confidently. (A minimal data quality check is sketched at the end of this section.)

  • Data scientist – Accountability: Exploring and analysing data, developing machine learning models and ensuring data quality.

Key questions:

  • What data do we need to support our AI initiatives?
  • How will we ensure the quality and accessibility of our data?
  • How will we validate and refine machine learning models?
  • How can we communicate insights derived from data analysis to stakeholders?

  • Machine learning engineer – Accountability: Developing, testing and deploying machine learning models, and optimising them for performance and scalability.

Key questions:

  • How will we translate AI prototypes into production-ready code?
  • What strategies will we use to optimise AI models for real-world scenarios?
  • How will we monitor and maintain the performance of deployed models?
  • How can we ensure the scalability and reliability of AI systems?

  • Data engineer – Accountability: Managing data pipelines and storage solutions, and ensuring data quality and accessibility.

Key questions:

  • How will we build and maintain data pipelines for AI models?
  • What steps are needed to ensure data quality and accessibility?
  • How will we manage and optimise data storage solutions?
  • How can we ensure data security and compliance with regulations?
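
To make "good quality data feeds the ML models" tangible, data engineers often put an automated validation gate at the start of every pipeline run. The sketch below is a minimal, hand-rolled illustration assuming pandas; the column names, schema and thresholds are assumptions, and a real pipeline would more likely use a dedicated validation framework.

```python
import pandas as pd

def validate_training_data(df: pd.DataFrame) -> list[str]:
    """Return a list of data quality problems; an empty list means the batch passes."""
    problems = []
    required_columns = {"customer_id", "age", "monthly_spend", "churned"}  # assumed schema
    missing = required_columns - set(df.columns)
    if missing:
        problems.append(f"Missing columns: {sorted(missing)}")
        return problems  # no point in further checks without the schema

    if df["customer_id"].duplicated().any():
        problems.append("Duplicate customer_id values found")
    if df["age"].isna().mean() > 0.05:  # tolerate at most 5% missing ages (assumed threshold)
        problems.append("More than 5% of 'age' values are missing")
    if not df["age"].dropna().between(16, 120).all():
        problems.append("'age' contains out-of-range values")
    if not df["churned"].dropna().isin([0, 1]).all():
        problems.append("'churned' must be binary (0/1)")
    return problems

# Hypothetical incoming batch with deliberate quality problems.
batch = pd.DataFrame({
    "customer_id": [1, 2, 2],
    "age": [34, None, 150],
    "monthly_spend": [120.0, 80.5, 95.0],
    "churned": [0, 1, 1],
})
issues = validate_training_data(batch)
print(issues or "Batch passed all checks")
```

Failing a batch loudly at the gate is usually cheaper than debugging a silently degraded model later.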

Risks of data leakage

  • Exposure of sensitive information: Data leakage can expose personal data, including financial details, private communications and other confidential information. (A simple redaction sketch follows this list.)
  • Potential for malicious use: Exposed data can be used for extortion or blackmail, or sold on dark web platforms, posing severe threats to affected individuals and organisations.
  • Reputational and financial damage: Organisations suffering from data leakage can face significant reputational damage, loss of customer trust and financial repercussions.
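
A common first line of defence against leaking sensitive information is to redact obvious personal data before text reaches an external model, log or third-party service. The patterns below are deliberately simple and assumed for illustration; production-grade protection needs dedicated data loss prevention tooling and review by the data and governance teams.

```python
import re

# Very rough patterns for illustration only; real PII detection needs dedicated tooling.
PII_PATTERNS = {
    "email":     re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "id_number": re.compile(r"\b\d{13}\b"),            # e.g. a 13-digit national ID (assumed format)
    "phone":     re.compile(r"\+?\b\d[\d\s-]{7,}\d\b"),
}

def redact_pii(text: str) -> str:
    """Replace likely PII with placeholder tokens before sending text to an external service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Customer jane.doe@example.com (ID 9001014800086) called from +27 82 555 1234."
print(redact_pii(prompt))
```

Even a crude filter like this reduces the chance that a prompt pasted into a generative AI tool carries customer identifiers with it.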

What do the stats say about AI implementation and data leakage?

  • Failure rate: Studies indicate that up to 85% of AI projects fail, primarily due to poor data quality. This high failure rate is often attributed to the use of flawed, incomplete or biased data sets.
  • Data leakage: By 2025, open source large language models (LLMs) are projected to exhibit a data leakage rate of 52.5%. This statistic highlights significant privacy risks associated with these models.
  • Security concerns: A telecoms company internal survey found that 65% of respondents viewed generative AI as a security risk. This concern led the company to restrict the use of generative AI tools like ChatGPT among its employees.

AI development and implementation team

After strategic direction has been set, the implementation team needs to ensure that solutions align with the strategy and that implementation is effectively co-ordinated and successfully delivered. Like a surfer navigating the waves with precision and balance, this team ensures that AI solutions are seamlessly integrated and developed.

  • Project manager or scrum master – Accountability: Overseeing AI projects, co-ordinating between teams and ensuring timely and budget-compliant completion.

Key questions:

  • How will we manage resources and timelines for AI projects?
  • Which project management methodologies will we use?
  • How will we ensure effective communication and collaboration among teams?
  • How can we track and report on project progress and milestones?

  • Business analyst – Accountability: Acting as a bridge between technical teams and business stakeholders, ensuring AI solutions meet business needs.

Key questions:

  • What are the specific business needs that AI solutions should address?
  • How will we ensure AI solutions align with business goals?
  • How will we gather and document business requirements for AI projects?
  • How can we validate that AI solutions meet stakeholder expectations?

  • Developer/programmer – Accountability: Building AI solutions, ensuring technical feasibility and performance.

Key questions:

  • What are the technical requirements to address the business needs with AI solutions? This involves identifying the necessary algorithms, data sources and infrastructure to develop AI solutions.
  • How will we ensure the AI solutions are scalable, maintainable and integrated with existing systems, while also meeting performance benchmarks and delivering measurable business outcomes? (A minimal serving sketch follows this list.)
  • How will we test and debug AI solutions to ensure functionality and performance?
  • How can we document and maintain code to support future development and scalability?
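
As an illustration of what "production-ready code" and "integration with existing systems" can look like, the sketch below wraps a trained model behind a small HTTP endpoint with input validation. FastAPI, pydantic and the churn_model.pkl artefact are assumptions chosen for the example, not a mandated stack.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, Field
import joblib

app = FastAPI(title="Churn scoring service")
model = joblib.load("churn_model.pkl")  # hypothetical artefact produced by the data team

class CustomerFeatures(BaseModel):
    age: int = Field(ge=16, le=120)      # reject obviously invalid input at the boundary
    monthly_spend: float = Field(ge=0)
    months_active: int = Field(ge=0)

@app.post("/score")
def score(features: CustomerFeatures) -> dict:
    """Return the model's churn probability for one customer."""
    row = [[features.age, features.monthly_spend, features.months_active]]
    try:
        probability = float(model.predict_proba(row)[0][1])
    except Exception as exc:  # surface model errors as a clean API error
        raise HTTPException(status_code=500, detail=f"Model inference failed: {exc}")
    return {"churn_probability": probability}
```

Validating inputs at the boundary and converting model failures into clean API errors are the kinds of details that separate a prototype from something the operations team can safely run.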

AI operations and quality

To ensure consistent quality and reduce go-live risk, the continuous quality ecosystem needs to establish a level of confidence in the solution before the 'go live' switch is pressed. Like waiting for the perfect wave, this team ensures that all conditions are optimal and risks are minimised before taking the plunge.

  • DevOps engineer – Accountability: Deploying, integrating and maintaining AI systems, ensuring smooth and efficient operation.

Key questions:

  • How will we deploy and integrate AI systems into our existing infrastructure?
  • What automation tools will we use to maintain AI systems?
  • How will we monitor and troubleshoot AI system performance? (A minimal release-gate sketch follows this list.)
  • How can we ensure continuous integration and delivery of AI updates?

  • Test team – Accountability: Developing and implementing test strategies for AI systems, ensuring flexibility, transparency and security.

Key questions:

  • What test strategies will we use to ensure AI models are reliable and secure?
  • How will we address ethics and bias management in AI testing?
  • How will we validate AI models against real-world scenarios?
  • How can we ensure transparency and accountability in AI testing processes?
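
The "level of confidence before pressing the go-live switch" can itself be automated as release-gate tests in the deployment pipeline. The pytest-style sketch below assumes a saved model, a frozen labelled holdout file, an 85% accuracy threshold and a 200ms latency budget, all of which are illustrative choices rather than recommendations.

```python
import time
import joblib
import pandas as pd
from sklearn.metrics import accuracy_score

# Hypothetical artefacts: a trained model and a frozen holdout set with known labels.
MODEL_PATH = "churn_model.pkl"
HOLDOUT_PATH = "holdout_labelled.csv"

def test_model_accuracy_meets_release_threshold():
    model = joblib.load(MODEL_PATH)
    holdout = pd.read_csv(HOLDOUT_PATH)
    features = holdout.drop(columns=["churned"])
    predictions = model.predict(features)
    accuracy = accuracy_score(holdout["churned"], predictions)
    assert accuracy >= 0.85, f"Accuracy {accuracy:.2f} is below the agreed release threshold"

def test_single_prediction_latency_budget():
    model = joblib.load(MODEL_PATH)
    sample = pd.read_csv(HOLDOUT_PATH).drop(columns=["churned"]).head(1)
    start = time.perf_counter()
    model.predict(sample)
    elapsed_ms = (time.perf_counter() - start) * 1000
    assert elapsed_ms < 200, f"Prediction took {elapsed_ms:.0f} ms, over the 200 ms budget"
```

Run in continuous integration, checks like these let the DevOps engineer and test team block a release automatically when the model regresses.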

Call to action: Diverse teams drive successful AI journeys

As you embark on your AI journey, remember that success lies in the collaboration of a diverse and skilled team. By clearly defining roles and accountability, you can navigate the complexities of AI implementation with confidence and precision.

Just like a surfer needs to understand the waves, balance on the board and work with the ocean's rhythm, your organisation needs to identify and empower the right roles to ride the AI wave.

Don't wait to start building your AI dream team! Identify the key roles that will drive your AI initiatives forward and ensure each member understands their unique contributions. Begin with a 90-day evolving roadmap, focusing on continuous improvement rather than perfection. With the right team and strategy, you'll be well-equipped to ride the AI wave to organisational success.
