How does the updated OECD AI definition differentiate AI systems from merely algorithmic software?

Functionality, autonomy, adaptiveness, and impact

S. Kate Conroy
Mar 6, 2024

In 2023 the OECD updated its definition of an AI system. Cleaned up, the updated definition reads:

An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment. [Italics added]

The change has drawn a degree of contestation, including from experts who question whether the definition is sufficient to differentiate AI from other systems [1].

The OECD has released a memorandum explaining the updated definition [.pdf].

OECD (2024), “Explanatory memorandum on the updated OECD definition of an AI system”, OECD Artificial Intelligence Papers, No. 8, OECD Publishing, Paris, https://doi.org/10.1787/623da898-en.

An enticing blog post suggests that the new information will help disambiguate AI and non-AI systems.

Unfortunately, neither the blog post nor the memorandum explains in plain language how the definition achieves this goal.

Take the memorandum’s section on ‘inference’:

The concept of “inference” generally refers to the step in which a system generates an output from its inputs, typically after deployment. When performed during the build phase, inference, in this sense, is often used to evaluate a version of a model, particularly in the machine learning context. In the context of this explanatory memorandum, “infer how to generate outputs” should be understood as also referring to the build phase of the AI system, in which a model is derived from inputs/data

This paragraph tells us that inference is a step, but not what inference is [2]. At the crux of the issue is the sense that even simple functions infer, so AI could be almost anything that instantiates a function. But let’s consider key aspects in the definition [3]:

  1. Machine-based System: Common to both AI and non-AI systems. Machine-based systems could also be mechanical devices so this attribute is not distinctive.
  2. Inference from Input: The argument is meant to be that, unlike simple algorithmic systems that operate on predefined, rule-based logic without deviation, AI systems infer outcomes based on the input they receive: they analyse data, learn patterns, or apply pre-trained models to generate outputs. This is meant to be the key distinction from basic algorithms that execute static instructions without variation. Of course, inference in philosophical logic and mathematical proofs can be quite deterministic, so what type of inference is under consideration? Modern AI systems using machine learning and Bayesian techniques are built on statistical inference: they analyse data and make predictions, decisions, or generate insights based on the statistical properties of that data.
  3. Explicit or Implicit Objectives: AI systems are designed with specific goals in mind, which can be explicitly defined (e.g., classify images) or implicitly learned (e.g., optimising performance based on user interactions). This contrasts with algorithmic software that performs programmed tasks without optimising towards a goal beyond its immediate function. A good example: AI systems using unsupervised learning or certain reinforcement learning techniques can develop strategies or solutions to problems based on patterns in the data, even if a specific outcome isn’t explicitly programmed. For instance, an AI could infer that clustering similar data points together might be beneficial for the task at hand, even if it was not explicitly instructed to do so.
  4. Generation of Outputs: The outputs of AI systems (predictions, content, recommendations, decisions) are typically dynamic and can change based on new data or learned insights. This adaptability and the nature of the outputs, which can directly influence physical or virtual environments, set AI systems apart from conventional software that produces static or predictable outputs.
  5. Levels of Autonomy and Adaptiveness after Deployment: This aspect highlights that AI systems can operate with varying degrees of independence from human intervention and can adapt their behaviour over time through learning or updating mechanisms. In contrast, traditional algorithmic software requires manual updates or changes to alter its functionality.
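The inference distinction in point 2 can be sketched in code. The example below is my own illustration, not from the OECD text: a pricing function with a fixed, predefined rule versus one that derives its rule (here, a slope and intercept via ordinary least squares) from the data it receives. The data and rates are invented for the sketch.

```python
# Contrast a rule-based system with one that statistically infers its rule.

def rule_based_price(size_sqm: float) -> float:
    """Static rule fixed at design time: flat rate per square metre."""
    return 3000.0 * size_sqm

def fit_linear_model(xs, ys):
    """Ordinary least squares on one feature: infer slope and intercept
    from the input data, rather than having them hard-coded."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return lambda x: slope * x + intercept

# Toy training data (sizes in sqm, prices in dollars) -- illustrative only.
sizes = [50.0, 75.0, 100.0, 125.0]
prices = [160000.0, 235000.0, 310000.0, 385000.0]

learned_price = fit_linear_model(sizes, prices)
print(rule_based_price(80.0))  # always the same predefined rule
print(learned_price(80.0))     # behaviour derived from the data received
```

Feed the second function different data and its behaviour changes with no code change; the first behaves identically regardless of any data it has seen. That, on the OECD reading, is the difference 'infers, from the input it receives' is pointing at.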

Can we now evaluate a piece of software to determine if it’s AI?

  • Does the software statistically infer outputs based on the analysis of input data? If it only follows a static set of rules without learning or adapting, it’s likely not AI.
  • Can the software pursue explicit or implicit objectives by analysing data and learning from it? If its operations are not goal-oriented or capable of optimisation, it might not be AI.
  • Does the software exhibit autonomy or the ability to adapt post-deployment? If there’s no capacity for independent operation or adaptation based on new information without human intervention, it might not be AI.
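The three questions above amount to a crude screening procedure, which can be sketched as follows. The attribute names and the two example profiles are my own hypothetical illustrations, not part of the OECD definition.

```python
from dataclasses import dataclass

@dataclass
class SoftwareProfile:
    # Hypothetical attributes mirroring the three questions above.
    infers_statistically: bool    # outputs derived from statistical analysis of inputs
    goal_oriented: bool           # pursues explicit or implicit objectives
    adapts_post_deployment: bool  # some autonomy or adaptiveness after deployment

def likely_ai(profile: SoftwareProfile) -> bool:
    """Crude screen: fail any of the three questions and the system
    is likely not AI, on the reading developed above."""
    return (profile.infers_statistically
            and profile.goal_oriented
            and profile.adapts_post_deployment)

spam_filter = SoftwareProfile(True, True, True)      # learns from labelled mail
payroll_calc = SoftwareProfile(False, False, False)  # fixed arithmetic rules

print(likely_ai(spam_filter))   # True
print(likely_ai(payroll_calc))  # False
```

This is a sketch of a heuristic, not a test: real borderline systems will answer 'partly' to each question, which is exactly where the definitional contestation lives.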

So statistical inference is really important to AI systems. Can AI systems use other sorts of inference, e.g. analogical reasoning, causal reasoning, critical reasoning, counterfactual reasoning and intuitive reasoning? Interestingly, LLMs can mimic these using statistical processes over vast data sets.

By keeping the simple and ambiguous word ‘infer’ in the definition, plus requiring explicit or implicit objectives, the OECD is likely to capture all AI systems currently deployed or under development. For those who read ‘infer’ as including non-statistical, deterministic inference, non-AI systems will also qualify.

References

[1] Professor Lyria Bennett Moses expressed concerns with the updated OECD AI definition in ‘What is the AI Management System Standard ISO/IEC 42001:2023?’, a recording of the presentation by the National AI Centre’s (NAIC) Responsible AI Network in collaboration with Standards Australia, published 4 Mar 2024.

[2] The reference provided in this paragraph links to a report that does not provide any further clarification on what inference means in the definition, see OECD (2019), “Scoping the OECD AI principles: Deliberations of the Expert Group on Artificial Intelligence at the OECD (AIGO)”, OECD Digital Economy Papers, No. 291, OECD Publishing, Paris, https://doi.org/10.1787/d62f618a-en.

[3] To help me consider this section I asked ChatGPT 4.0 some questions about the definition including:

Human prompt: You are a global AI expert and policy writer. You have AI technical proficiency and expert experience writing international policy and standards. Please analyse the following definition and explain how this definition allows someone to decide if a particular piece of software is AI or just algorithmic <insert OECD definition of AI>

TBH, the answers of GPT were not that great in the first instance, but I dug into definitions of inference, drew on my cognitive science knowledge and background and poked around enough to finish this brief article. :)

