
The Challenges of Producing Quality Code When Using AI-Based Generalistic Models

Using AI with generalist models to do very specific things, like generating code, can cause problems. Producing code with AI is like using code from someone you don't know: it may not match your standards and quality. Creating specialised or dedicated models can be a way out.

Luise Freese and Iona Varga spoke about practical dilemmas regarding AI models and ethics at NDC Oslo 2023.

Artificial intelligence hints at a sense of actual intelligence, while in practice it is the way these models are built that gives them their name, Varga mentioned. By connecting nodes, we hope to mimic the neurons and synapses in the brain, and since this resembles the network in our brain, we call it an artificial network, or artificial intelligence, she said.

Freese added that, viewed abstractly, computers rely solely on transistors, in a configuration where each is either on or off. By making combinations of these, you can manipulate bits. Transistors don't entangle with each other; they are simply a bunch of switches leading to an outcome:

Computers therefore do not think; it is in our AI algorithms that we give them personality traits like being polite and saying things like "let me think about that". AI is just using statistics to predict, classify, or combine things.
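To make the switches-to-outcomes point concrete, here is a minimal TypeScript sketch (an illustration, not anything the speakers showed): treating each transistor as an on/off boolean, logic and even arithmetic can be built purely by wiring such switches together.

```typescript
// Model a transistor as a bit that is either on (1) or off (0).
type Bit = 0 | 1;

// NAND is one combination of switches; everything else can be built from it.
const nand = (a: Bit, b: Bit): Bit => (a && b ? 0 : 1);

// More "operations" are just more switches wired together.
const not = (a: Bit): Bit => nand(a, a);
const and = (a: Bit, b: Bit): Bit => not(nand(a, b));
const xor = (a: Bit, b: Bit): Bit =>
  nand(nand(a, nand(a, b)), nand(b, nand(a, b)));

// Even arithmetic emerges from switch combinations, with no thinking involved:
const halfAdder = (a: Bit, b: Bit) => ({ sum: xor(a, b), carry: and(a, b) });

console.log(halfAdder(1, 1)); // { sum: 0, carry: 1 }, i.e. 1 + 1 = binary 10
```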

The problem with AI, Varga mentioned, is when we take very generalist models, or foundation models, to do very specific things. The way large language models (LLMs) work is by analysing a question and generating a few words; based on statistics, the model predicts the best match for the next token, she said. It cannot fact-check itself, since it's designed to generate, not validate, she added.
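As a rough illustration of that token-by-token statistical prediction, here is a toy bigram model in TypeScript (my own sketch, not the speakers' example; real LLMs use transformer networks over learned embeddings rather than raw counts). It shows the principle Varga describes: the loop always emits the statistically most plausible continuation, and nothing in it checks whether the output is true.

```typescript
// Toy next-token predictor: count which token follows which in a corpus,
// then always emit the most likely continuation. Generation, not validation.
const corpus =
  "the model predicts the next token the model generates text".split(" ");

// Build bigram counts, e.g. follows["the"] = { model: 2, next: 1 }.
const follows = new Map<string, Map<string, number>>();
for (let i = 0; i < corpus.length - 1; i++) {
  const next = follows.get(corpus[i]) ?? new Map<string, number>();
  next.set(corpus[i + 1], (next.get(corpus[i + 1]) ?? 0) + 1);
  follows.set(corpus[i], next);
}

// Greedy decoding: pick the highest-count successor at each step.
function predictNext(token: string): string | undefined {
  const candidates = follows.get(token);
  if (!candidates) return undefined;
  return [...candidates.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

let token = "the";
const output = [token];
for (let i = 0; i < 4; i++) {
  const next = predictNext(token);
  if (!next) break;
  output.push(next);
  token = next;
}
console.log(output.join(" ")); // "the model predicts the model"
```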

By trying to build one AI model that is going to solve every AI problem we encounter, we're starting to create a self-amplifying downward spiral, Freese said. If we want to reach an upward spiral, we should use fewer foundation models and start using more specialised models, and maybe some of these are actually built on top of those foundation models, she added.

AI may produce code, but is it safe to use, and does it match my standards and quality? These questions can only be answered by an actual human, and this process should not be underestimated, Varga said. In the end, it's like working with code: code from someone you've never worked with before is harder to debug than a codebase you were involved in from the beginning, Freese concluded.

General models have a general understanding, which can lead to problems when generating code, as Varga explained:

Whether something is, for example, React v17 or v16 isn't directly in a model's context, yet it knows about the codebase. Maybe you get into a situation where it is trying to create a function for you, but it combines both versions into one piece of code.
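Here is a hypothetical TypeScript sketch of the failure mode Varga describes (my illustration, not the speakers' example; they name React v16 and v17, but the clearest concrete API split is between React 17 and 18). The snippet mixes the legacy ReactDOM.render entry point, used in React 17 and earlier, with the createRoot API introduced in React 18, two idioms that should never coexist in the same mount function.

```tsx
import React from "react";
import ReactDOM from "react-dom";              // legacy entry point (React <= 17)
import { createRoot } from "react-dom/client"; // introduced in React 18

function App() {
  return <h1>Hello</h1>;
}

// An assistant that doesn't know the project's React version from context
// may stitch both idioms into one function:
function mount() {
  const container = document.getElementById("root")!;

  // React 18 style: create a root, then render into it...
  const root = createRoot(container);
  root.render(<App />);

  // ...immediately followed by the pre-18 style, which conflicts with the
  // root created above and which React 18 flags as unsupported.
  ReactDOM.render(<App />, container);
}
```

Notably, code like this can still type-check when both API surfaces are present in the installed react-dom package, which is why it can slip past a quick review.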

Varga mentioned that AI would, in many cases, be a great starting point for solving problems. However, once you start with AI, you need to check, validate, modify, edit, and rewrite things, and this is the part where we might underestimate the work that AI brings.

InfoQ interviewed Luise Freese and Iona Varga about the challenges of artificial intelligence.

InfoQ: What can cause AI to fail?

Iona Varga: AI in general is not doomed to fail. I come from a background in medical physics, and I have seen plenty of great AI tools that performed shear wave elastography at an exceptional level, detected babies at an early stage, or spotted even the smallest nodules tied to lung cancer that no oncologist was able to find.

Since we have fake data and twisted facts, the outcome is not always trustworthy. Think, for example, of Trump's inauguration, where the number of people who showed up was lower than originally communicated. Ask a model how busy the park was, and you might get a surprising answer. Also, the origin of data can have a historical background that is still debatable today, or that has been reshaped to conform to current political plays, standards, and so on.

InfoQ: How can ethics help us to deal with the issues that we have with AI?

Luise Freese: Ethics as a tool in itself will not do much. Ethics is only a way of working, like DevOps. Once you have a plan and know what you want to do, ethics is sort of your definition of done. Does the data I have used represent everything and everyone who should be included and who is going to use my product? By using these sanity checks, your way of working will improve in accessibility and inclusivity, and help you steer away from bias.
