What are the risks of artificial intelligence?

Tuesday, 01 August 2023

Ash Fontana
Managing Partner, Zetta

    The idea that boards can control the use of generative AI within a company is flawed, but if you can’t ban it, at least you can manage it, writes AI-focused venture capitalist Ash Fontana.


    Generative AI combines many types of AI developed during the first half of the “AI-first century”, from 1950 to 2000.

    Decades of research on statistical matching of sentences, breaking pictures down into pixels and distributing data across databases underlie what we see in something like ChatGPT.

    The cake that OpenAI baked has many layers and we can’t put the taste we experience down to any one ingredient. However, perhaps the main ingredient we taste in the experience of using ChatGPT is elegant product design.

    The latest generative AI apps — Bard, Stable Diffusion and ChatGPT — are so easy to use that hundreds of millions of people can now taste the potential of an AI-first approach to much of what we have to do in work and life.

    The fast automation, creation and investigation enabled by generative AI allow us to do more with less. However, the odd characteristic of generative AI models is that the output is actually “made up” rather than logically constructed.

    This makes the risk of using generative AI models in a corporate context both high and low. High, in that anyone who contributed to the training of a model could lay claim to the output of that model. Low, in that it’s often impossible to link the output back to a source. In the extreme, generative AI models are sometimes so creative that the output has no basis in reality — it’s a hallucination — making the risk of a claim on the output zero, but the risk of relying on that output infinite.

    This risk dichotomy presented by generative AI can be partially resolved by separating the internal and external risks.

    Boards considering these risks may find it useful to take some of the preliminary steps below. Before we outline these steps, it would be wise to recognise something upfront — the notion that boards can police the use of generative AI within a company is fallacious because most generative AI models are trained on so much real-world data that their output is highly believable, thus practically undetectable. If you can’t ban it, at least you can manage it.

    External risks are essentially those that revolve around someone having an intellectual property claim on whatever is produced by generative AI, whether that’s content, code or copy. These risks exist for both companies using generative AI to produce something to sell, and those consuming the output of another company’s product based on generative AI.

    Effective management

    Companies using generative AI to produce something to sell should comply with the relevant laws around fair use or licensing of training data. They should properly trace the provenance of that data and consider that using general models is less risky than using specific models. This is because general models are trained on broader datasets — thus reducing the risk of generating output that’s near identical to an existing piece of content.
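    To make the “near identical” concern concrete, the sketch below shows one way a company might screen generated text against the protected or licensed documents it already holds before using it. This is a minimal illustration only, assuming a simple word-shingle comparison in Python; the function names, shingle size and 0.3 review threshold are arbitrary assumptions, not a standard or any vendor’s API.

```python
# Minimal sketch: screen generated text for near-duplication against known
# protected or licensed source documents before it is used. Names, the shingle
# size and the threshold are illustrative assumptions.
from typing import Iterable, List, Set


def shingles(text: str, k: int = 5) -> Set[str]:
    """Break text into overlapping k-word "shingles" for fuzzy comparison."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}


def jaccard_similarity(a: Set[str], b: Set[str]) -> float:
    """Overlap between two shingle sets: 0.0 (disjoint) to 1.0 (identical)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)


def flag_for_review(generated: str, sources: Iterable[str],
                    threshold: float = 0.3) -> List[float]:
    """Return similarity scores against each source that exceed the threshold."""
    gen = shingles(generated)
    scores = (jaccard_similarity(gen, shingles(src)) for src in sources)
    return [s for s in scores if s >= threshold]


if __name__ == "__main__":
    protected = ["the quick brown fox jumps over the lazy dog near the river bank"]
    output = "a quick brown fox jumps over the lazy dog near the river"
    # A non-empty result means the output should be routed for human review.
    print(flag_for_review(output, protected))
```

    Anything that scores above the threshold would be routed for human review rather than blocked outright; the point is a cheap, auditable check, not a legal determination.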

    These licensing, provenance and generality points will be useful when talking to potential customers, insurers and claimants. The provenance point is perhaps the easiest and most effective to action because properly tagging the source of training data means it can then be linked to an owner.

    The owner can then be properly compensated if the model actually generates income. However, companies using generative AI to produce something to sell should not expect to own anything produced with the help of generative AI because the legal guidance across major jurisdictions is currently unclear.
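    To make the provenance point concrete, the sketch below attaches a simple provenance record (owner, licence, source URL) to each training document and splits any model revenue across owners by document count. The schema, field names and the naive per-document split are illustrative assumptions only; real attribution and compensation arrangements would be contractually defined.

```python
# Minimal sketch of provenance tagging for training data. The schema and the
# per-document revenue split are illustrative assumptions, not a standard.
from collections import Counter
from dataclasses import dataclass
from typing import Dict, List


@dataclass(frozen=True)
class ProvenanceTag:
    owner: str       # rights holder to compensate if the model earns revenue
    licence: str     # e.g. "CC-BY-4.0" or a commercial licence reference
    source_url: str  # where the document was obtained


@dataclass
class TrainingDocument:
    text: str
    tag: ProvenanceTag


def attribute_revenue(docs: List[TrainingDocument], revenue: float) -> Dict[str, float]:
    """Naive attribution: split model revenue across owners by document count."""
    counts = Counter(doc.tag.owner for doc in docs)
    total = sum(counts.values())
    return {owner: revenue * n / total for owner, n in counts.items()}


if __name__ == "__main__":
    docs = [
        TrainingDocument("...", ProvenanceTag("Publisher A", "commercial licence", "https://example.com/a")),
        TrainingDocument("...", ProvenanceTag("Publisher A", "commercial licence", "https://example.com/b")),
        TrainingDocument("...", ProvenanceTag("Author B", "CC-BY-4.0", "https://example.org/c")),
    ]
    print(attribute_revenue(docs, revenue=900.0))  # {'Publisher A': 600.0, 'Author B': 300.0}
```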

    Companies consuming the output of another company’s product that’s based on generative AI could ask that company about the sources of data used to train the models underlying the product (looking out for protected content), double-check that all protected content was properly licensed or fairly used according to the relevant law, and consider adding protections in their vendor contracts that confirm the above (or even require the vendor to indemnify the user).

    Internal risks are essentially those that revolve around output quality. Briefly, the risks here are somewhat cultural in the sense that those using generative AI as part of their workflow could be submitting work to colleagues that is low-quality or hard to understand.

    This could increase internal communication friction, create the need to redo work downstream and raise questions about who could easily be replaced by AI “workers”.

    My advice for those considering how to use generative AI is essentially the same as in my book The AI-First Company, about using any form of AI. A “lean AI” approach is the safest, most capital-efficient way to build an AI-first company. Constrained experimentation on separate infrastructure not only allows adequate understanding of the output, but also provides certainty around data provenance, fast iteration and greater understanding of customer needs.

    Ash Fontana is an early-stage investor focused on AI and author of The AI-First Company: How to Compete and Win with Artificial Intelligence (Penguin). Watch the AICD video Building an AI-first company: A director’s guide here.

    This article first appeared under the headline 'The risk dichotomy’ in the August 2023 issue of Company Director magazine.
