JUNE 24, 2024

Avid AI Guidelines Establish Responsible AI


AI has sparked a sea change in society. While technologies utilizing AI and machine learning (ML) have existed for some time, it was the introduction of ChatGPT that revealed the potential for this to be the most transformative technology in decades, capable of touching all aspects of our lives. It could change the way we live, learn, and work, which is why we established the Avid AI guidelines.

The media and entertainment (M&E) industry sits directly in the eye of the AI storm, with the opportunity to minimize mundane, tedious tasks in the content production process. While this sounds ideal, there are downsides, such as eliminating roles and potentially replacing human creativity and experience with derivative, technologically generated content.

More efficient workflows and a faster rate of delivery provide M&E companies with a more cost-effective means of content delivery, while risking stale, uninspired storytelling in the future. The risk has been taken so seriously within the industry that it contributed to two strikes, by writers and by actors, intended to protect the rights of creatives.

Headshot of Kevin Riley
Avid CTO, Kevin Riley


The advent of new technology almost always creates the risk of irresponsible or malicious use. The more we explore AI at Avid, the more potential we see for it to support creatives in accelerating content delivery, decreasing costs, and improving the creation experience. By freeing them from mundane tasks, AI lets them focus on that creative spark, that lightbulb moment, where they have the vision to inspire and amaze their audiences with their stories. However, we are quite aware of the potential for damage if AI is not applied thoughtfully.

Avid’s customers, partners, employees, and some government bodies have expressed concerns about the social implications of AI usage. Avid’s responsible AI policy takes a proactive approach to the use of these tools and technologies for both internal and external applications. It aims to provide guidelines and practices for the responsible use of AI based on a set of fundamental principles, with a focus on values that ensure AI is developed and deployed safely, ethically, and in compliance with expanding regulations across regions and jurisdictions. The EU and government agencies within the United States have already moved to formulate regulations around AI.

Avid’s responsible AI position will affect commercial offerings in two primary ways. First, Avid will strive to expose AI usage that benefits our users rather than seeking to replace them. Second, Avid will expose AI in such a way that decisions made about, and content generated through, AI are ultimately owned and curated by the users. The fundamental principles of the Avid AI guidelines in the deployment of AI are:

  • Safety: Ensure the security of the end-user and the company. 
  • Privacy: Ensure privacy for data belonging to the end-user and the company.  
  • Fairness: Result in outcomes that treat all people fairly, without discrimination, empowering and engaging everyone. 
  • Reliability: Consider context, resulting in outcomes that accurately model performance and uphold data quality, ensuring the proper functioning of AI systems throughout their lifecycle. 
  • Transparency: Provide an explanation for proper use that is understandable by any end-user, with facilities to trace and audit where the results come from. 
  • Accountability: Result in outcomes that comply with expanding standards and legislation in all applicable regions and jurisdictions. 
  • “Human-in-the-Loop”: Benefit the creator by supporting and assisting — but not replacing — human decisions in the creative process. 

This final principle is critical: the ultimate decision on the use of content must always lie in the hands of those who are creating it, even if they are utilizing aspects of AI or ML to do so. This affects all areas of content creation but has significant implications when it comes to news. There are already established use cases where content is generated by AI, some financial reporting or election results coverage, for example, but the final decision to publish still rests with the producer, who has the ultimate say.

Avid Summary Engine
Workflows such as transcription or translation are possible with AI

It is also important to recognize that some AI technologies may run locally on a machine, while others may require a connection to a cloud-hosted service. Transparency about which method is being used to enable a specific feature or capability is key, so it is clear to customers what is involved. At Avid, we are incredibly conscious of the amazing work of our customers and the enormous value in the intellectual property of their media assets. This is why we have implemented our safety and security policies to ensure the protection of content at all times throughout the production supply chain.

To expand on this further, let’s discuss how Avid is taking a risk-based approach to these issues.

Avid will use a risk-based framework to identify risks to Avid and to our customers when using particular AI solutions.

Risks include exposure to security threats or unanticipated intellectual property exposure. Risk is categorized using the dual measure of probability and severity, with mitigation measures and prioritization dictated by that rating.
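As an illustration of how a dual probability-and-severity measure might drive prioritization, here is a minimal sketch. The level names, scoring, and priority bands below are hypothetical assumptions for illustration, not Avid’s actual framework:

```python
from enum import IntEnum

class Level(IntEnum):
    """Hypothetical three-point scale for both probability and severity."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def risk_rating(probability: Level, severity: Level) -> int:
    """Combine probability and severity into a single score (1-9)."""
    return int(probability) * int(severity)

def priority(score: int) -> str:
    """Map a combined score to an illustrative mitigation band."""
    if score >= 6:
        return "mitigate before deployment"
    if score >= 3:
        return "mitigate with monitoring"
    return "accept and review periodically"

# Example: a likely but low-severity risk scores 3 * 1 = 3,
# landing in the "mitigate with monitoring" band.
score = risk_rating(Level.HIGH, Level.LOW)
```

The point of such a matrix is that neither measure alone dictates action: a severe but improbable risk and a mild but likely one can land in the same band.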

  • Human ownership of AI decisions: AI technology is positioned as assistive in nature. Care should be taken not to put AI in the driver’s seat: make it the co-pilot, not the pilot, with the journalist or the editor holding ultimate decision-making power.
  • Attribution: The media industry needs to be sure that data and models we are using are fully transparent. Part of our responsibility is that we use models which are fully explained, understandable, and trained using lawfully obtained and approved data and content. 
  • Fairness and reliability in the training process: AI projects need to ensure best efforts that the models developed, used, and trained are fair, generating results that are free from discrimination and bias. IP and data belonging to Avid, and our customers, will not be exposed for training purposes without explicit permission.  
  • Verity checks: Based on commonly accepted practices, detecting false or hallucinatory information is extremely difficult; however, establishing origin and provenance (where the data came from) is less so. Avid develops tools using methods such as watermarking and trust certificates. 
  • Full disclosure: Avid and our customers share a responsibility to be clear about when AI is used in our solutions, and customers should likewise be clear with their audiences.
  • Collaboration: As customers and our industry work to figure out the pitfalls, benefits, and reliable use of AI, Avid is taking a leading role in sharing knowledge and seeking constant feedback from customers, technology partners, and stakeholders through lab-style prototyping efforts, targeted customer and partner forums, and conferences and trade publications.

Another aspect of responsible AI is the so-called “co-pilot” concept, where the AI/ML technologies are assisting the user in doing their job, helping them work more efficiently. It is there to help, not replace. We have already demonstrated, at recent trade shows and events such as IBC in Amsterdam and NAB in Las Vegas, some ways in which this kind of approach could benefit our users.

Screenshot of Media Composer on a Monitor
ScriptSync in Media Composer
  • Kevin Riley

    Kevin brings experience in SaaS and subscription business and delivery models as well as cloud architectures. Kevin is responsible for Avid's technology vision and innovation strategy and is helping to steer the company's top-to-bottom technology strategy to support its digital transformation.
