Agentic AI: Improving the Speed at which Vehicles are Designed and Tested


As with all industries, agentic AI is already starting to have a dramatic impact on the automotive industry, particularly with regard to:

  • improving the speed at which vehicles are designed and tested in ways which also better reflect:
    • customer needs;
    • the need to design safer and more efficient vehicles;
    • the complex interactions between components in vehicles;
    • better thermal management of electric motors; and
    • multiple user preferences;
  • transforming and dramatically speeding up vehicle testing by simulating complex, real-world conditions, generating diverse scenarios and analysing vast amounts of data more efficiently than traditional methods;
  • developing increasingly autonomous vehicles with real-time perception and decision-making via the processing of data from cameras, LiDAR, and radar;
  • enhancing in-vehicle experience by monitoring driver and passenger behaviours and proactively adjusting vehicle settings (navigation and climate control); 
  • developing advanced in-car assistants that can offer predictive maintenance and diagnostic alerts and intelligence, concierge services and dynamic navigation, creating unique, personalised journeys and new revenue streams;
  • optimising smart manufacturing processes to improve manufacturing efficiency, productivity and quality control, including via the use of predictive maintenance of production lines, improving robotics coordination and implementing measures to reduce energy costs and waste;
  • optimising supply chains based on better predictive capabilities and dynamic routing;
  • developing more efficient business functions, such as in relation to recruitment and the management of staff;
  • creating more engaged and personalised customer dealings and experiences including more personalised vehicle configurations and settings;
  • streamlining financial processes such as loan approvals, which can lead to increased sales and profitability; and
  • better enabling the efficient management and operation of vehicle fleets.

In all of this, agentic AI is being utilised to accelerate innovation and development by bringing together specialised AI agents to simulate expert discussions and streamline workflows in a virtual environment. In so doing, these AI agents are designed to:

  • focus on a specific domain (including engine, manufacturing, powertrain, materials and regulatory);
  • provide human engineers with instant and expert‑level answers. 

This blog provides an overview of:

  • what is meant by “agentic AI”;
  • the key legal risks to consider as regards the use of agentic AI;
  • how to mitigate the legal risks associated with the use of agentic AI;
  • the key overarching issues to get right when procuring agentic AI systems; and
  • how contracts need to be written to deal with agentic AI.

What is Agentic AI?

While the terms “agentic AI” and “AI agents” are often used together or interchangeably in casual conversation, there is a significant difference. AI agents are, in essence, autonomous, decision-making systems powered by artificial intelligence (modern AI agents commonly use LLMs as their “brains”). They can be thought of as specialised employees that are deployed to undertake particular functions. In contrast, agentic AI systems and multi-agent systems are more sophisticated systems that are designed to operate with a higher degree of autonomy for the purpose of achieving a wider set of objectives and goals; these systems act as an AI agent “conductor” or manager that deploys, coordinates and manages multiple agents.

Take the analogy of an orchestra: AI agents are the individual musicians (violinist, cellist, flautist, tuba player, percussionist, etc.), whereas the agentic AI system is the conductor responsible for managing the agents, shaping the sound and delivering a unique artistic vision. Here, agentic systems bring specialist AI agents together to operate collaboratively in a “big room” for the purpose of achieving a common goal, rather than working in isolation in individual silos.

In all of this, accessible, reliable and outcome-focused agentic AI will fundamentally transform how businesses in the automotive industry operate as organisations move from instruction-based deterministic computing to intent-based computing, whereby an individual can give an AI agent a desired outcome and the AI agent then works out the best way to achieve such outcome with only limited further human input required.
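By way of a purely illustrative sketch (the class names, agent domains and goal below are hypothetical, not a description of any particular product), the conductor/musician split and the shift to intent-based computing can be pictured as an orchestrator that takes a desired outcome and routes it to specialist agents:

```python
# Minimal sketch of the "conductor" pattern: an agentic system that
# coordinates domain-specialist AI agents towards a stated goal.
# All names are illustrative; a real system would wrap LLM calls,
# not return string stubs.

class SpecialistAgent:
    """An AI agent focused on a single domain (e.g. powertrain, regulatory)."""

    def __init__(self, domain):
        self.domain = domain

    def answer(self, question):
        # Stub standing in for an LLM-backed, expert-level response.
        return f"[{self.domain}] analysis of: {question}"


class AgenticOrchestrator:
    """The 'conductor': deploys and coordinates multiple specialist agents."""

    def __init__(self, agents):
        self.agents = {agent.domain: agent for agent in agents}

    def pursue_goal(self, goal, domains):
        # Decompose the intent into per-domain questions and gather
        # each specialist's contribution towards the common goal.
        return {d: self.agents[d].answer(goal) for d in domains}


orchestrator = AgenticOrchestrator([
    SpecialistAgent("powertrain"),
    SpecialistAgent("materials"),
    SpecialistAgent("regulatory"),
])
results = orchestrator.pursue_goal(
    "reduce electric motor operating temperature by 5%",
    ["powertrain", "materials"],
)
for answer in results.values():
    print(answer)
```

The point of the sketch is the division of labour: the human states the outcome once, and the orchestrator, not the human, decides which specialists to consult.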

Here, a 2025 Accenture study predicts that, by 2030, AI agents (the building blocks of agentic AI) will be the primary users of most enterprises’ internal digital systems, and the World Economic Forum anticipates that CEOs will soon be required to manage hybrid workforces of humans and intelligent AI agents.

Key Legal Considerations & Risks when using AI

Agentic AI deployments have the potential to deliver enormous benefits to organisations, including in terms of increased efficiency, productivity and operational capabilities. However, as with humans, agentic AI systems can make mistakes, and, due to their semi-autonomous nature, the pace at which they can complete tasks and the limited requirement for active human involvement, the risks are compounded and magnified.

Key legal risks to be considered in respect of agentic AI deployments include:

  • Compliance with Laws: The regulatory landscape is in a state of flux as governments around the world tread the line between promoting AI and protecting the public against potential risks. That said, new laws are being introduced in various countries and guidance is being regularly published on the application of current laws to AI. For example, in January of this year, the UK Information Commissioner’s Office published a report on the data protection implications of agentic AI, emphasising that organisations remain responsible for data protection compliance of the agentic AI that they develop, deploy or integrate into their systems and processes.

In addition to regulatory fines and other sanctions, organisations may be subject to civil claims for breach of relevant laws relating to the use of AI systems, including, for example, product liability, IP infringement, defamation and discrimination claims.

  • Contractual Liability: contractual liability may arise in connection with the deployment of agentic AI, including:
    • Execution of Contracts: AI agents may enter into contracts on behalf of organisations; in certain cases, the agent will be specifically deployed for such purposes (e.g. automated trading systems). However, due to the increasingly autonomous nature of such systems, there are risks that the system makes an unauthorised or incorrect transaction for which the deployer is liable.
    • Contractual Restrictions and Limits: Care will need to be taken not to exceed or otherwise breach any usage restrictions in contracts with customers or suppliers, including as agent ‘users’ are likely to process considerably higher volumes of transactions than human users.
  • Tortious Liability: an organisation may be exposed to tortious liability, including: (i) negligence (e.g. in respect of misleading information provided by a chatbot); (ii) nuisance (e.g. an out-of-control self-driving vehicle causes damage to property); and (iii) breach of statutory duty – see comment above.
  • Data Security Risks: In addition to regulatory compliance obligations and breaches, data security risks in AI systems include data leakage, data poisoning, model inversion, system manipulation and adversarial attacks (such as prompt injection), which can result in loss or corruption of information and data, unauthorised fund transfers, compromised model integrity, model theft, intellectual property infringements and operational disruption.
  • Intellectual Property Rights Disputes: AI-generated content may infringe intellectual property rights, including copyright materials and trademarks, leading to potential legal disputes. The widely reported case of Getty Images v Stability AI[1] (which is subject to appeal) is a notable recent example of the risks and challenges facing developers and rights holders. Additionally, ownership of AI-created works remains a grey area.
  • Director Duties: Under the UK Companies Act 2006, directors are required to exercise reasonable care, skill, and diligence and face potential liability for failures in governance or supervision of AI systems. This is complicated by the increasingly autonomous and “black box” nature of AI systems.

We explore these and other related points in our follow-on blog “What are the key issues to get right when procuring AI systems”.

Mitigating Agentic AI Risks

In light of the inherent risks in AI and the changing regulatory landscape, a holistic, top-down approach is required to ensure responsible and informed use of AI agents (and AI more broadly) by organisations, including implementing clear internal governance controls, ensuring human oversight, building guardrails into AI systems by design (including clearly defined parameters regarding what ‘decisions’ the system can make) and having clear redress procedures where issues or complaints are raised.
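As a hypothetical sketch only (the action names, spend limit and function signatures are illustrative assumptions, not a prescribed implementation), “guardrails by design” can be as simple as a wrapper that whitelists the decisions an agent may take alone and escalates everything else to a human reviewer:

```python
# Illustrative guardrail wrapper: the agent may act autonomously only
# within pre-approved parameters; anything outside its delegated
# authority is escalated for human sign-off. All values are examples.

APPROVED_ACTIONS = {"reorder_parts", "schedule_maintenance"}
SPEND_LIMIT = 10_000  # illustrative per-transaction cap


def guarded_execute(action, amount, human_approver):
    """Execute an agent 'decision' only if it falls within defined parameters.

    human_approver is a callable(action, amount) -> bool representing the
    human-oversight step for out-of-scope decisions.
    """
    if action not in APPROVED_ACTIONS or amount > SPEND_LIMIT:
        # Escalate: the decision exceeds the agent's delegated authority.
        if not human_approver(action, amount):
            return ("rejected", action)
    return ("executed", action)


# A human-reviewer stub that declines everything escalated to it.
def decline_all(action, amount):
    return False


print(guarded_execute("reorder_parts", 500, decline_all))   # within limits
print(guarded_execute("transfer_funds", 500, decline_all))  # escalated and declined
```

The design choice worth noting is that the default is refusal: the agent acts alone only inside an explicit envelope, which mirrors the contractual point above about clearly defining what decisions the system can make.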

In addition, just as specific cyber insurance policies are now ubiquitous as a consequence of increased data security risks, organisations may seek insurance protection for specific AI risks that are not covered by existing traditional policies.

What are the key overarching issues to get right when procuring Agentic AI Systems?

When procuring Agentic AI, consideration of the following points will be critical to the success of the project:

  • As with any major transformative IT project, senior stakeholder buy-in and sponsorship of the adoption of the planned application of AI is essential.
  • Strong data sets are critical – as with any IT project, if the data to be analysed is poor, the outputs that will be generated will be similarly poor.
  • Businesses need to acknowledge the impact of AI on their staff – here, some evidence suggests that staff are more motivated at businesses which have a clear strategy for AI and which can show how the time spent by staff on current tasks can be redeployed to more interesting roles.
  • The AI being developed should be “responsible”, meaning that the AI solution is contractually required to:
    • be based on morally strong ethics;
    • avoid bias;
    • avoid the creation of deep fakes and hallucinations;
    • take full account of the different risk classification levels in the EU AI Act; and
    • avoid privacy breaches.
  • Because of the speed at which AI has been developing, any project to develop AI technology will benefit from being done on an agile basis provided always that the contract contains appropriate and robust controls as to how change is managed on a continuing basis. 
  • The users of the planned AI System require comprehensive training, both as regards the capability of the AI system but also as regards the way in which the company expects its staff to use the AI in question in their roles.
  • Contract for AI in a smart manner that pays proper attention to the opportunities and risks that arise.

Contracting for Agentic AI Systems

When contracting for Agentic AI Systems, the following topics are worthy of special consideration:

  • Understanding what is being delivered
  • Intellectual Property
  • Confidentiality
  • Data usage rights
  • Security
  • Audit rights and transparency
  • Governance
  • Liability allocation
  • Emerging laws

Looking at these in turn:

What is to be delivered

  • Be clear about the use of AI and consider carefully the manner in which AI is to be used to deliver any services and/or technology.
  • For most users, the accuracy of any outputs and the extent to which a user can rely on the outputs of any AI system are critical. In so doing, contracts should be clear as to which party is responsible for verifying the outputs of any AI system.
  • A customer should expect clarity as regards how the AI model was developed.
  • Consider the extent to which the supplier is responsible for training the AI model.
  • Contracts should include clear commitments to the effect that the AI system does not use manipulative techniques, exploit vulnerabilities or engage in social scoring, untargeted facial recognition, predictions of criminality based on profiling or inferring emotions in workplaces.
  • Provide for the key tasks which should not be performed by AI. For example:
    • AI should not be solely responsible for decisions that directly impact human lives due to the risk of bias and lack of accountability;
    • AI should not process personal data in ways that violate privacy or violate the principle of “fairness”;
    • As AI cannot truly think critically or be creative, it should not be relied upon for tasks requiring nuanced, human-like, out-of-the-box or original thought;
    • Increased care should be taken as regards the use of AI systems in unpredictable environments; and
    • AI should not be used in scenarios requiring genuine emotional intelligence, such as therapy or complex negotiations.

Intellectual Property Rights

  • AI depends, at its heart, on software having been trained to think. Much of the material used to train many AI models will have been scraped from the internet and may, in turn, be protected by a range of IP rights (particularly copyright). Here, the creator of the AI system may or may not have agreed any rights to use the material on which its AI system was trained.
  • Then, when in use, AI tools create new content and the question of who owns the resulting content is best expressed in the contract whilst the law on this point catches up.
  • With that in mind, contracts should include:
    • a commitment from the supplier that it has all necessary rights to the data which has been used to train its AI model;
    • a commitment from the supplier that any personal data is processed strictly in accordance with all applicable data protection laws.
    • a prohibition on using any client-supplied personal or confidential data to train, fine-tune or improve AI models (whether proprietary or third-party), unless, in the case of personal data, the data is anonymised and specific prior written consent has been given.
  • As the law of copyright regarding the ownership of AI-generated works is evolving, contractual clarity on who owns, and who is licensed to use, the outputs of an AI system is critical. Here:
    • owners of AI systems want to be able to utilise all material that is inputted into their AI tools, so that the material is available to the owner as part of the AI system’s learning. Is this acceptable to the owners of the material being used to train the AI? Often, it is not;
    • users want to own whatever material the AI system creates as a result of their prompts, but AI system owners often assert ownership rights over such material for use in their AI systems;
    • under current UK and EU law, the legal status of training AI models on such data without authorisation is unresolved, and case law is emerging;
    • UK and EU copyright laws currently do not recognise AI-generated works as protectable unless a human can be identified as the author. Therefore, IP ownership terms for AI outputs must be clearly defined by contract, as default legal protection is uncertain;
    • uncertainty exists as to the first copyright owner – is it the user who prompts the AI system, the owner of the AI system, or is there insufficient human intervention for copyright to subsist at all; and
    • indemnities may be used to share the risks of IP infringement between the parties.

Confidentiality

Most businesses have developed valuable trade secrets and methodologies applicable to their businesses. These materials are generally regarded as confidential know-how and trade secrets and typically not shared with either suppliers or customers. However, the deployment of AI may raise the risk of disclosure of confidential know-how and trade secrets and the creation of derivative works.

Supplier terms and conditions may include broad rights for the supplier to access, retain and use this confidential know-how not only for the customer’s AI project, but also for other customers’ projects. For many users, this may be unacceptable, and ways need to be found to protect the confidentiality and security of a customer’s confidential information, particularly when AI systems are provided using cloud solutions.

Data usage rights

AI systems rely on large datasets for training and to improve their overall performance. In the context of the design, manufacture and sale of vehicles, much of the data will be operational or technical and, in all likelihood, will contain confidential information.

In addition, some data will be personal data – not only data relating to those individuals who are customers and potential customers, but also information relating to customer preferences when using a vehicle. To the extent that any data is capable of identifying an individual, issues will arise as regards whether the use of that personal data has been properly authorised by the individuals concerned. In all of this, particularly as regards historic data, there will be risks that its use in connection with the operation and/or training of AI systems has not been authorised.

Security

Key issues to reflect in a contract include:

  • Ask for evidence of the supplier’s security standards (including measures to protect personal, sensitive or confidential data processed by the AI system and the manner in which such compliance is tested).
  • Consider if security standards are appropriate and satisfactory.
  • Include a contractual obligation on the supplier to:
    • notify the customer immediately of a security incident or data security breach.
    • stop using the AI system in certain situations, such as a cyber-attack.
  • Consider the extent to which an AI supplier should be responsible for issues arising out of a cyber-attack on its systems.

Audit rights and transparency

In the EU, any AI System that is classified as a “High Risk” AI system (being an AI system that may pose a significant risk to people’s health, safety or fundamental rights), must meet stringent transparency and explainability obligations. High risk AI systems extend to AI being used in relation to safety critical applications (such as in relation to automated driving systems) and in relation to biometric identification. In all such cases, the contracts to deliver such AI systems need to consider issues such as:

  • The ability of the AI system to log events over its lifetime so as to provide a traceable record of its performance.
  • Any high-risk AI systems must be capable of human oversight to monitor its performance and to minimise risk – it remains to be seen how the developing safety standards for autonomous vehicles at Levels 4 and 5 will operate in this context.

In addition, data protection laws require transparency when any personal data is involved.

Companies contracting to use AI models should therefore ensure that:

  • Contracts include rights to audit, inspect or request documentation demonstrating that the compliance and performance of any AI system satisfy all regulatory obligations.
  • The supplier is able to provide traceability as regards its training sources, logic and decision-making processes if outputs affect legal or contractual decisions.

Governance

Key issues to reflect in a contract include:

  • A mechanism to log information about the way in which the AI system operates.
  • Audit rights.
  • An ongoing mechanism whereby the supplier proactively searches for flaws and vulnerabilities in the AI system, adopting a hacker’s mindset and methods.
  • An ongoing mechanism whereby the supplier searches for and corrects any instances of bias in its AI system.

Liability allocation

Liability issues are set to arise between the developers and users of AI Systems, particularly where AI systems may produce misleading, inaccurate or biased outputs, or even generate entirely fabricated content (so called “hallucinations”) that appear factually plausible but are false.

In all of this, the suppliers of AI systems and services will be seeking to limit their liability to sums relating to the cost of the AI system with exclusions of liability as regards economic losses arising out of the use of the AI system.

However, the question of who will be liable for issues arising out of the use of AI systems where human oversight is limited remains unsettled. In all of this, the core challenge will be to determine whether a harm resulted from a defective AI system (developer liability) or negligent use of the AI system (user/operator liability). 

For example:

  • Professionals using AI must ensure systems are suitable, reliable, and used within their intended scope, verifying outputs to avoid personal liability.
  • AI developers and providers are likely to be held liable if their AI systems are deemed defective (e.g., in safety-critical areas), particularly under new EU rules.
  • Those who use AI systems are likely to be held responsible for actions taken by their employees who improperly rely on or misuse AI tools.
  • Those who use AI systems that improperly process personal data in ways that violate any applicable data protection laws may be liable to harmed data subjects and for regulatory fines.
  • Those who develop or use AI systems that produce biased results are likely to be liable for those outcomes. 

Conclusion

AI has a multiplicity of uses. In all of this, it is hard to see how those who sell AI systems will be able to avoid responsibility for how those systems work, whilst those that use such systems will need to take responsibility for how they use them.

All of this is a fast developing area and we can help you navigate a way through the complex issues that arise.


[1] Getty Images (US) Inc (and others) v Stability AI Limited | Insights | Squire Patton Boggs


